Search results for: simulation optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7526

596 Analysis of the Contribution of Coastal and Marine Physical Factors to Oil Slick Movement: Case Study of Misrata, Libya

Authors: Abduladim Maitieg, Mark Johnson

Abstract:

Developing a coastal oil spill management plan for the Misratah coast is the motivating factor for building a database of coastal and marine systems and energy resources. Wind direction and speed, currents, bathymetry, coastal topography and offshore dynamics influence oil spill deposition in coastal waters. Therefore, oceanographic and climatological data can be used to understand oil slick movement, potential oil deposits on the shoreline and the behaviour of oil spill trajectories on the sea surface. The purpose of this study is to investigate how coastal and marine physical factors, under strong wave conditions and various bathymetric and coastal topography gradients in the western coastal area of Libya, affect the movement of oil slicks. The movement of oil slicks was computed using the GNOME simulation model based on current and wind speed/direction. The results in this paper show that (1) an oil slick might reach the Misratah shoreline in two days in both the summer and winter seasons; (2) the north coast of Misratah is the potential oil deposit area on the Misratah coast; (3) tarball pollution was observed along the north coast of Misratah; (4) two scenarios, for the summer and the winter season, were run along the western coast of Libya; (5) the eastern coast is at lower potential risk due to the influence of wind and current energy in the Gulf of Sidra; (6) the Misratah coastline is more vulnerable to oil spill movement in summer than in winter; (7) an oil slick takes 2 to 5 days to reach the saltmarsh on the eastern Misratah coast; (8) an oil slick moves 300 km in 30 days from the spill source location near the Libyan western border to the Misratah coast; (9) bathymetric features have a profound effect on oil spill movement; and (10) oil dispersion simulations using GNOME are carried out taking into account high-resolution wind and current data.
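As an illustration of the drift principle behind such trajectory models (this is not the GNOME code itself), the minimal sketch below advects a slick position using the common approximation that surface oil moves with the current plus roughly 3% of the wind speed; the coordinates, wind and current values are hypothetical.

```python
import numpy as np

# Minimal sketch of surface oil-slick advection (illustrative only, not GNOME).
# Assumption: slick velocity = current velocity + ~3% wind-drift factor.
WIND_DRIFT_FACTOR = 0.03
EARTH_RADIUS_M = 6.371e6

def advect(lon, lat, current_uv, wind_uv, hours, dt_s=3600.0):
    """Advect a point (degrees) with constant current and wind vectors (m/s)."""
    u = current_uv[0] + WIND_DRIFT_FACTOR * wind_uv[0]  # eastward component, m/s
    v = current_uv[1] + WIND_DRIFT_FACTOR * wind_uv[1]  # northward component, m/s
    for _ in range(int(hours * 3600 / dt_s)):
        lat += np.degrees(v * dt_s / EARTH_RADIUS_M)
        lon += np.degrees(u * dt_s / (EARTH_RADIUS_M * np.cos(np.radians(lat))))
    return lon, lat

# Hypothetical release point west of Misratah, eastward current and north-westerly wind
print(advect(lon=14.0, lat=32.6, current_uv=(0.15, 0.02), wind_uv=(4.0, -2.0), hours=48))
```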

Keywords: oil spill movement, coastal and marine physical factors, coast area, Libyan

Procedia PDF Downloads 215
595 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril

Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro

Abstract:

Supramolecular chemistry is a field of growing interest. Moreover, studying the formation of host-guest complexes between macrocycles and dyes is highly attractive due to their potential applications, examples of which are drug delivery, catalytic processes, and sensing, among others. There are different dyes of interest in the literature; one example is the quinolinone derivatives. These molecules have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Secondly, several macrocycles can be found in the literature; one example is the cucurbiturils. This water-soluble macrocycle family has a hydrophobic cavity and two identical carbonyl portals. Additionally, thermodynamic analysis of these supramolecular systems can help in understanding the affinity between host and guest, their interactions, and the main stabilization energies of the complex. In this work, two 7-(diethylamino)quinolin-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from an experimental and in-silico point of view. In the experimental section, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). Inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1-2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics studies were carried out for 300 ns in explicit solvent at NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems. Finally, thermodynamic parameters from MMPBSA calculations were obtained to generate new computational insights for comparison with the experimental results.
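For readers unfamiliar with how a 1:1 binding constant of this magnitude translates into complexation, the short sketch below computes the equilibrium fraction of guest bound for a 1:1 host-guest model from the exact quadratic solution; the concentrations and Ka value are hypothetical, not taken from the paper.

```python
import numpy as np

def fraction_bound(host_tot, guest_tot, Ka):
    """Exact 1:1 host-guest equilibrium: returns the fraction of guest complexed.

    Solves Ka = [HG] / ([H][G]) with mass balances H_tot = [H] + [HG] and
    G_tot = [G] + [HG], which gives a quadratic in [HG].
    """
    b = host_tot + guest_tot + 1.0 / Ka
    hg = (b - np.sqrt(b**2 - 4.0 * host_tot * guest_tot)) / 2.0
    return hg / guest_tot

# Hypothetical values: 50 uM host, 10 uM guest, Ka ~ 1e6 M^-1
print(fraction_bound(host_tot=50e-6, guest_tot=10e-6, Ka=1e6))
```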

Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivative dyes, thermodynamics

Procedia PDF Downloads 78
594 The Impact of Strategy Formulation and Implementation on an Organization's Financial Consequences in Malaysian Private Hospitals

Authors: Naser Zouri

Abstract:

Purpose: Measures of strategy formulation and implementation reflect product/market-based strategic management categories such as courtesy, competence, and compliance, which are needed to reach high loyalty within the financial ecosystem; they also address marketplace errors with respect to fair-trade organization. Finding: The findings show the ability of executive-level management to motivate and make better decisions to resolve problems in the business organization; they also indicate an ideal level for each intervention policy for a hypothetical household. Methodology/design: A questionnaire-style instrument was used for data collection in both the pilot test and the main survey. The questionnaire was divided and distributed among finance employees. Respondents were nominated on the basis of non-probability sampling (convenience sampling). Administering the questionnaire among the respondents was feasible in terms of cost, but collecting data from the hospital was very difficult. The survey items covered implementation strategy, environment, supply chain, and employees (the impact of implementation strategy on achieving better financial consequences), as well as strategy formulation, comprehensiveness of strategic design, and organizational performance (the impact of strategy formulation on financial consequences). Practical implication: A dynamic-capability approach to formulating and implementing strategy focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources, which is valuable for making a theoretical contribution. Originality/value of research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory. We shed new light on how dynamic-capability research can benefit from case studies by discovering the qualifications that shape the development of capabilities and determining the boundary conditions of the dynamic-capabilities approach. Limitation of the study: The present study relies on a survey methodology for data collection, and obtaining responses from financial employees was difficult because of workplace limitations.

Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance

Procedia PDF Downloads 290
593 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia

Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay

Abstract:

Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option to maximize crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires repeated trials over many years under different weather conditions and crop management practices. To avoid such long-term experimentation, crop models such as AquaCrop are useful. Therefore, the overall objective of this study was to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons with four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. The AquaCrop model was validated for its performance in simulating green canopy cover and aboveground biomass of maize for the varying sowing dates based on the calibrated parameters. The results of the present study showed good agreement (overall R² = , Ef = , d = , RMSE = ) between measured and simulated values of canopy cover and biomass yield. Considering the overall values of the statistical test indicators, the performance of the model in predicting maize growth and biomass yield was successful, so it is a valuable tool to support decision-making. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date under climate and soil conditions similar to those of the study area, instead of conducting long-term experimentation.
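As a pointer to how such agreement statistics are typically computed, the sketch below evaluates R², RMSE, Nash-Sutcliffe efficiency (Ef) and Willmott's index of agreement (d) for paired observed/simulated series; the sample arrays are hypothetical, not the trial data.

```python
import numpy as np

def evaluate(obs, sim):
    """Common goodness-of-fit statistics for observed vs. simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    rmse = np.sqrt(np.mean(resid**2))
    # Squared Pearson correlation between observed and simulated values
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    # Nash-Sutcliffe efficiency: 1 is perfect, values < 0 are worse than the mean
    ef = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    # Willmott's index of agreement, bounded between 0 and 1
    d = 1.0 - np.sum(resid**2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
    return {"R2": r2, "RMSE": rmse, "Ef": ef, "d": d}

# Hypothetical canopy-cover observations (%) and simulated values
print(evaluate(obs=[22, 48, 75, 90, 85], sim=[25, 45, 70, 92, 80]))
```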

Keywords: AquaCrop model, calibration, validation, simulation

Procedia PDF Downloads 48
592 Desulfurization of Crude Oil Using Bacteria

Authors: Namratha Pai, K. Vasantharaj, K. Haribabu

Abstract:

Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at levels from 0.05% to 13.95%, and its removal by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestrius, potato dextrose agar, oxygen, pyrogallol and an inert gas (nitrogen). Method adopted and proposed: 1) the growth and energy needs of the bacteria are studied; 2) compatibility with crude oil is assessed; 3) the reaction rate of the bacteria is studied and optimized; 4) the reactor is developed by computer simulation; 5) the simulated design is tested by building the reactor. The method being developed uses the bacterium Alicyclobacillus acidoterrestrius, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulphur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time at which they begin reacting) and fed into the reactor. The beads are replenished with O2 by passing them through a jacket around the reactor that has an O2 supply; O2 cannot be supplied directly, as crude oil is inflammable, hence this arrangement. The beads are kept in motion based on the fluidized bed reactor concept. By controlling the velocity of the inert gas pumped in, beads exhausted of O2 are made to settle; they are recycled through the jacket where O2 is re-fed, and the beads inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa and at 45°C and treated with tartaric acid (to provide the pH required for bacterial growth) for optimum output. Being of the oxidising type, the bacteria react with the sulphur in the crude oil and liberate SO4²⁻ and no gas; the SO4²⁻ is absorbed into H2O. NaOH is fed once the reaction is complete and the beads are separated. The crude oil is thus separated from SO4²⁻, and thereby from sulphur, tartaric acid and other acids, which are removed. Bio-corrosion is addressed by painting the internal walls (phenol-epoxy paints). Earlier methods used Pseudomonas and Rhodococcus species; they were found to be inefficient, time- and energy-consuming, and to reduce the fuel value, as they fed on the carbon skeleton.

Keywords: alicyclobacillus acidoterrestrius, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil

Procedia PDF Downloads 306
591 Carbon Sequestration Modeling in the Implementation of REDD+ Programmes in Nigeria

Authors: Oluwafemi Samuel Oyamakin

Abstract:

The forest in Nigeria is currently estimated to extend to around 9.6 million hectares, but it used to extend over central and southern Nigeria decades ago. The forest estate is shrinking due to long-term human exploitation for agricultural development, fuelwood demand, uncontrolled forest harvesting and urbanization, amongst other factors, compounded by population growth in rural areas. Nigeria has lost more than 50% of its forest cover since 1990, and currently less than 10% of the country is forested. The current deforestation rate is estimated at 3.7%, one of the highest in the world. Reducing Emissions from Deforestation and forest Degradation, plus conservation, sustainable management of forests and enhancement of forest carbon stocks, constitutes what is referred to as REDD+. This study evaluated some of the existing ways of computing carbon stocks using eight indigenous tree species: Mansonia, Shorea, Bombax, Terminalia superba, Khaya grandifolia, Khaya senegalenses, Pines and Gmelina arborea. While these components are the essential elements of the REDD+ programme, they can be brought under a broader framework of systems analysis designed to arrive at optimal solutions for future predictions through the statistical distribution pattern of carbon sequestered by the various tree species. Available data on the height and diameter of trees in Ibadan were studied, their respective carbon sequestration potentials were assessed, and the data were subjected to tests to determine the best statistical distribution to describe the carbon sequestration pattern of the trees. The results of this study suggest a reasonable statistical distribution for the carbon sequestered, for use in simulation studies, and hence allow planners and government to forecast resources for sustainable development, especially where experiments with real-life systems are infeasible. Sustainable management of forests can then be achieved by projecting the future condition of forests under different management regimes, thereby supporting conservation and REDD+ programmes in Nigeria.
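As an illustration of this kind of distribution fitting (the species data themselves are not reproduced here), the sketch below fits several candidate distributions to a hypothetical sample of per-tree carbon stocks and ranks them with a Kolmogorov-Smirnov test using scipy.stats.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-tree carbon stocks (kg C per tree), right-skewed as is typical
carbon = rng.lognormal(mean=5.0, sigma=0.6, size=200)

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "weibull_min": stats.weibull_min}
results = []
for name, dist in candidates.items():
    params = dist.fit(carbon)                       # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(carbon, name, args=params)
    results.append((name, ks_stat, p_value))

# A smaller KS statistic (larger p-value) indicates a better-fitting distribution
for name, ks, p in sorted(results, key=lambda r: r[1]):
    print(f"{name:12s}  KS={ks:.3f}  p={p:.3f}")
```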

Keywords: REDD+, carbon, climate change, height and diameter

Procedia PDF Downloads 151
590 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results that aim to predict the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased and non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and because it is a step function there can be different false positive rates for a given true positive rate and vice versa. Besides, because the estimated ROC curve has a jagged form while the true ROC curve is a smooth curve, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods under different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
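As a rough illustration of kernel-based ROC smoothing (without the boundary correction studied in the paper), the sketch below smooths the diseased and non-diseased score distributions with Gaussian kernels and evaluates sensitivity and 1-specificity over a threshold grid; the simulated scores are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 50)    # hypothetical non-diseased test scores
diseased = rng.normal(1.2, 1.0, 50)   # hypothetical diseased test scores

def kernel_survival(x, sample, bandwidth):
    """Kernel-smoothed estimate of P(score > x) for the given sample."""
    return norm.sf((x[:, None] - sample[None, :]) / bandwidth).mean(axis=1)

# Silverman's rule-of-thumb bandwidth
bw = lambda s: 1.06 * s.std(ddof=1) * len(s) ** (-1 / 5)
thresholds = np.linspace(-4, 5, 400)
fpr = kernel_survival(thresholds, healthy, bw(healthy))    # 1 - specificity
tpr = kernel_survival(thresholds, diseased, bw(diseased))  # sensitivity

# Area under the smoothed ROC curve via the trapezoidal rule
f, t = fpr[::-1], tpr[::-1]                         # reorder so FPR increases
auc = np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2.0)
print(f"smoothed AUC ~ {auc:.3f}")
```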

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 141
589 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor

Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi

Abstract:

This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, associated with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating the environmental impact. The modulating vapour injection scroll compressor represents an advanced technology that allows the compressor capacity to be adjusted based on the actual cooling needs of the system. Vapour injection allows the optimization of the refrigeration cycle, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with a low global warming potential (GWP), is part of an environmental sustainability perspective, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and freezing time of seafood products. The results of the experimental analysis highlighted the benefits deriving from the use of the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems. The modulating capacity of the compressor made it possible to adapt the cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste. Furthermore, the use of an electronic expansion valve provided greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of seafood has been significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A refrigerant has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for the food industry.

Keywords: freezing, scroll compressor, energy efficiency, vapour injection

Procedia PDF Downloads 27
588 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
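As a sketch of the idea (not the authors' exact chart), the code below forms beta-binomial posterior predictive limits for the next batch of cases given historical counts, using a central interval for simplicity; the counts, prior and batch size are hypothetical.

```python
import numpy as np
from scipy import stats

# Historical data: events of interest out of cases reviewed (hypothetical counts)
events_hist, cases_hist = 18, 400
# Beta posterior for the underlying CI rate with a uniform Beta(1, 1) prior
a_post, b_post = 1 + events_hist, 1 + cases_hist - events_hist

n_next = 50  # size of the next monitoring batch
# Beta-binomial posterior predictive distribution for events in the next batch
bb = stats.betabinom(n_next, a_post, b_post)
lower, upper = bb.ppf(0.025), bb.ppf(0.975)  # central 95% predictive limits
print(f"signal if events in the next {n_next} cases fall outside [{int(lower)}, {int(upper)}]")

# Monitoring step: compare an observed count against the limits
observed = 7
print("out of control" if not (lower <= observed <= upper) else "in control")
```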

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 195
587 Design and Development of an Innovative MR Damper Based on Intelligent Active Suspension Control of a Malaysia's Model Vehicle

Authors: L. Wei Sheng, M. T. Noor Syazwanee, C. J. Carolyna, M. Amiruddin, M. Pauziah

Abstract:

This paper presents alternatives to the classical passive suspension system, revised to improve comfort and handling performance. An active magnetorheological (MR) suspension system is proposed to explore active suspension and enhance performance, given its freedom to independently specify the characteristics of load carrying, handling, and ride quality. A Malaysian quarter-car model with two degrees of freedom (2DOF) is designed and constructed to simulate the actions of an active vehicle suspension system. The structure of a conventional twin-tube shock absorber is modified both internally and externally to be compatible with the active suspension system. The shock absorber peripheral structure is altered to enable assembly and disassembly of the damper through a non-permanent joint, and the stress analysis of the designed joint is simulated using Finite Element Analysis. Simulation of the internal part, where an electrified copper coil of 24 AWG is wound, is done using Finite Element Method Magnetics to measure the magnetic flux density inside the MR damper. The primary purpose of this approach is to reduce the vibration transmitted from the effects of road surface irregularities while maintaining solid manoeuvrability. The aim of this research is to develop an intelligent control system for a consecutive damping automotive suspension system. Ride quality is improved by reducing the vertical body acceleration experienced by the car body when it encounters disturbances from speed bumps and random road roughness. Findings from this research are expected to enhance ride quality, which in turn can prevent the deteriorating effect of vibration on the vehicle condition as well as on the passengers’ well-being.
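To illustrate the kind of 2DOF quarter-car model referred to here, the sketch below integrates the sprung/unsprung mass equations over a speed-bump input and reports the peak body acceleration; the parameter values are generic textbook figures, not the Malaysian model data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic quarter-car parameters (hypothetical, not the paper's vehicle data)
ms, mu = 250.0, 40.0          # sprung / unsprung mass, kg
ks, kt = 16000.0, 160000.0    # suspension / tyre stiffness, N/m
cs = 1500.0                   # suspension damping, N*s/m

def road(t, v=10.0, height=0.05, length=3.7):
    """Half-sine speed bump traversed at v m/s."""
    x = v * t
    return height * np.sin(np.pi * x / length) if 0 <= x <= length else 0.0

def rhs(t, y):
    zs, vs, zu, vu = y                      # body / wheel displacement and velocity
    f_s = ks * (zu - zs) + cs * (vu - vs)   # suspension force acting on the body
    f_t = kt * (road(t) - zu)               # tyre force acting on the wheel
    return [vs, f_s / ms, vu, (-f_s + f_t) / mu]

sol = solve_ivp(rhs, (0.0, 3.0), [0, 0, 0, 0], max_step=1e-3)
body_acc = np.gradient(sol.y[1], sol.t)     # sprung-mass (body) acceleration
print(f"peak body acceleration ~ {np.max(np.abs(body_acc)):.2f} m/s^2")
```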

Keywords: active suspension, FEA, magneto rheological damper, Malaysian quarter car model, vibration control

Procedia PDF Downloads 201
586 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is carried out by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. Therefore, inefficient contracting of this amount of energy distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and verifying contracting efficiency. Aiming for more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, is verified when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second, IIOC, is verified when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analysed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. For this, the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that had been occurring even before the normative resolution of 2015. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracted values are not adequate to maintain the provision of service to their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 108
585 Quantifying the Impact of Intermittent Signal Priority Given to BRT on Ridership and Climate: A Case Study of Ahmadabad

Authors: Smita Chaudhary

Abstract:

Traffic in India is largely uncontrolled and characterized by chaotic conditions in which lane discipline is not followed. Bus Rapid Transit (BRT) has emerged as a viable option to enhance transportation capacity and provide increased levels of mobility and accessibility. At present, many intersections in Ahmadabad face congestion and delay at signalized intersections due to the transit (BRT) lanes. Most of these intersections, in spite of being signalized, are operated manually due to the conflict between BRT buses and heterogeneous traffic. Although the BRTS in Ahmadabad has an exclusive lane of its own, this brings certain limitations that the city is currently facing. At many intersections in Ahmadabad, due to these conflicts, interference, and congestion, both heterogeneous traffic and transit buses suffer traffic delays of a remarkable 3-4 minutes at each intersection, which has become an issue of great concern. There is no provision for BRT bus priority, so the existing signals play little role in managing the traffic, which ultimately calls for manual operation. Daily BRTS ridership has fallen sharply because commuters no longer find this transit mode time-saving in their routine; the fall in ridership ultimately leads to an increased number of private vehicles, and the idling of vehicles at intersections causes air and noise pollution. In order to bring back these commuters, transit facilities need to be improved. Classified volume count and travel time delay surveys were conducted, and a revised signal design was prepared for the whole study stretch, which has three intersections and one roundabout; one intersection was later simulated in order to see the effect of giving priority to BRT on side-street queue length and travel time for heterogeneous traffic. This paper suggests recommendations on signal cycles, the introduction of intermittent priority for transit buses, and simulation of an intersection in the study stretch with the proposed signal cycle using VISSIM, in order to make this transit amenity feasible and attractive for commuters in Ahmadabad.

Keywords: BRT, priority, ridership, signal, VISSIM

Procedia PDF Downloads 433
584 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics

Authors: Varun K, Pramod B. Balareddy

Abstract:

Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy. Hence, accurate estimation of wind forces becomes important, as it is the primary input for the design and analysis of the reflector system. In the present work, numerical simulation of wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and the published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted over the surface of the reflector to study the net pressure variations. These resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector system experience a band of pressure variations for positive and negative angles of attack respectively. In the published wind tunnel data, pressure variations over convex surfaces are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and for α=+90° were lower, while for α ranging from +45° to +75° the maximum pressure coefficients were higher than the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl and Cm over α=+90° to α=-90° were in close agreement with the experimental data.
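For reference, the pressure coefficient extracted from such CFD surface data is simply the local gauge pressure normalized by the free-stream dynamic pressure; the short sketch below applies this to hypothetical surface pressure samples.

```python
import numpy as np

def pressure_coefficient(p_surface, p_inf, rho, v_inf):
    """Cp = (p - p_inf) / (0.5 * rho * V_inf^2) for each surface sample."""
    q_inf = 0.5 * rho * v_inf**2          # free-stream dynamic pressure, Pa
    return (np.asarray(p_surface) - p_inf) / q_inf

# Hypothetical front-face static pressures (Pa) at sea level in a 20 m/s wind
cp = pressure_coefficient(p_surface=[101530, 101480, 101390],
                          p_inf=101325.0, rho=1.225, v_inf=20.0)
print(np.round(cp, 2))
```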

Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient

Procedia PDF Downloads 241
583 Study of the Energy Efficiency of Buildings under Tropical Climate with a View to Sustainable Development: Choice of Material Adapted to the Protection of the Environment

Authors: Guarry Montrose, Ted Soubdhan

Abstract:

In the context of sustainable development and climate change, the adaptation of buildings to the climatic context in hot climates is a necessity if we want to improve living conditions in housing and reduce the risks to the health and productivity of occupants due to thermal discomfort in buildings. A wide variety of efficient solutions can be found, but at high cost. In developing countries, especially tropical countries, we need a technology with a very limited cost that is affordable for everyone, energy efficient and protective of the environment. Biosourced insulation is a product based on plant fibres, animal products or products from recyclable paper or clothing. Its development meets the objectives of maintaining biodiversity, reducing waste and protecting the environment. In tropical or hot countries, the aim is to protect the building from solar thermal radiation, a source of discomfort. The aim of this work is in line with the logic of energy control and environmental protection: the approach is to make the occupants of buildings comfortable, reduce their carbon dioxide (CO2) emissions and decrease their energy consumption (energy efficiency). We chose to study the thermo-physical properties of banana leaves and sawdust, especially their thermal conductivities; direct measurements were made using the flash method and the hot plate method. We also measured the heat flow on both sides of each sample by the hot box method. The results from these different experiments show that these materials are very efficient when used as insulation. We also conducted a building thermal simulation, using banana leaves as one of the materials, in Design Builder software. Air-conditioning load as well as CO2 release were used as performance indicators. When the air-conditioned building cell is protected on the roof by banana leaves, which are also integrated into the walls, with solar protection of the glazing, it saves up to 64.3% of energy and avoids 57% of CO2 emissions.
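As a reminder of the steady-state relation behind the hot-plate measurement, the sketch below recovers thermal conductivity from Fourier's law given the plate heat flow, sample thickness and temperature difference; the numbers are hypothetical, not the measured banana-leaf or sawdust values.

```python
def thermal_conductivity(q_watts, area_m2, thickness_m, delta_t_k):
    """Steady-state Fourier's law: k = q * L / (A * dT), in W/(m*K)."""
    return q_watts * thickness_m / (area_m2 * delta_t_k)

# Hypothetical hot-plate reading for a fibrous bio-based sample
k = thermal_conductivity(q_watts=2.5, area_m2=0.09, thickness_m=0.02, delta_t_k=10.0)
print(f"k ~ {k:.3f} W/(m*K)")  # low values (< 0.1) indicate good insulation
```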

Keywords: plant fibers, tropical climates, sustainable development, waste reduction

Procedia PDF Downloads 168
582 A Mixed Integer Linear Programming Model for Container Collection

Authors: J. Van Engeland, C. Lavigne, S. De Jaeger

Abstract:

In the light of the transition towards a more circular economy, recovery of products, parts or materials will gain in importance. Additionally, the EU proximity principle related to waste management and the emissions generated by transporting large amounts of end-of-life products shift attention to local recovery networks. The Flemish inter-communal cooperation for municipal solid waste management Meetjesland (IVM) is currently investigating the set-up of such a network. More specifically, the network encompasses the recycling of polyvinyl chloride (PVC), which is collected in separate containers. When these containers are full, a truck should transport them to the processor, which can recycle the PVC into new products. This paper proposes a model to optimize the container collection. The containers are located at different Civic Amenity sites (CA sites) in a certain region. Since people can drop off their waste at these CA sites, the containers gradually fill up during the planning horizon. If a certain container is full, it has to be collected and replaced by an empty container. The collected waste is then transported to a single processor. To perform this collection and transportation of containers, the responsible firm has a set of vehicles stationed at a single depot and different personnel crews. A vehicle can load exactly one container; if a trailer is attached to the vehicle, it can load an additional container. Each day of the planning horizon, the different crews and vehicles leave the depot to collect containers at the different sites. After loading one or two containers, the crew has to drive to the processor to unload the waste and pick up empty containers. Afterwards, the crew can again visit sites or return to the depot to end its collection work for that day. Throughout the collection process, the crew has to respect the opening hours of the sites. In order to allow for some flexibility, a crew is allowed to wait a certain amount of time at the gate of a site until it opens. The problem described can be modelled as a variant of the PVRP-TW (Periodic Vehicle Routing Problem with Time Windows). However, a vehicle can load at most two containers, hence only two subsequent site visits are possible. For that reason, we will refer to the model as a model for building tactical waste collection schemes. The goal is to find a schedule describing which crew should visit which CA site on which day, minimizing the number of trucks and the routing costs. The model was coded in IBM CPLEX Optimization Studio and applied to a number of test instances. Good results were obtained, and specific suggestions concerning route and truck costs could be made. For a large range of input parameters, collection schemes using two trucks are obtained.
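The following is a deliberately simplified sketch of such a collection model: a toy assignment of site visits to days subject to a collection deadline per site and a per-trip container limit, written with PuLP rather than CPLEX. The sites, deadlines and fleet size are hypothetical, and the full routing and time-window structure of the paper is omitted.

```python
import pulp

# Hypothetical data: each CA site's container must be collected by a deadline day
days = range(1, 6)
sites = {"A": 2, "B": 4, "C": 3, "D": 5}   # site -> latest collection day
trucks_per_day = 1
visits_per_truck = 2                        # a truck with trailer handles at most 2 containers/trip

prob = pulp.LpProblem("container_collection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("visit", (list(sites), days), cat="Binary")  # x[s][d]: collect site s on day d
used = pulp.LpVariable.dicts("truck_used", days, cat="Binary")

# Each site is collected exactly once, no later than its deadline
for s, deadline in sites.items():
    prob += pulp.lpSum(x[s][d] for d in days) == 1
    prob += pulp.lpSum(x[s][d] for d in days if d > deadline) == 0

# Daily capacity: one truck per day, at most two containers per trip
for d in days:
    prob += pulp.lpSum(x[s][d] for s in sites) <= trucks_per_day * visits_per_truck * used[d]

# Minimise the number of days a truck is dispatched
prob += pulp.lpSum(used[d] for d in days)
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for d in days:
    collected = [s for s in sites if x[s][d].value() == 1]
    if collected:
        print(f"day {d}: collect {collected}")
```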

Keywords: container collection, crew scheduling, mixed integer linear programming, waste management

Procedia PDF Downloads 117
581 Reacting Numerical Simulation of Axisymmetric Trapped Vortex Combustors for Methane, Propane and Hydrogen

Authors: Heval Serhat Uluk, Sam M. Dakka, Kuldeep Singh, Richard Jefferson-Loveday

Abstract:

The carbon footprint of the aviation sector measured 3.8% of the total in 2017, and it is expected to triple by 2050. New combustion approaches and fuel types are necessary to prevent this. This paper focuses on using propane, methane, and hydrogen as fuel replacements for kerosene and implements a trapped vortex combustor design to increase efficiency. Reacting simulations were conducted for an axisymmetric trapped vortex combustor to investigate the static pressure drop, combustion efficiency and pattern factor for cavity aspect ratios of 0.3, 0.6 and 1 and air mass flow rates of 14 m/s, 28 m/s and 42 m/s. Propane, methane and hydrogen are used as alternative fuels. The combustion model was anchored on a swirl flame configuration with an emphasis on high-fidelity boundary conditions, with favourable results from the eddy dissipation model implementation. The Reynolds-Averaged Navier-Stokes (RANS) k-ε turbulence model was used for turbulence modelling in the validation effort. A grid independence study was conducted for the three-dimensional model to reduce computational time. Preliminary results for a 24 m/s air mass flow rate provided a temperature profile inside the cavity close to that of the experimental study. The investigation will examine the effect of air mass flow rates and cavity aspect ratio on the combustion efficiency, pattern factor and static pressure drop in the combustor. A comparison study among pure methane, propane and hydrogen will be conducted to investigate their suitability for trapped vortex combustors and to conclude their advantages and disadvantages as fuel replacements. Therefore, the study will be one of the milestones towards achieving zero carbon emissions by 2050 or reducing carbon emissions.

Keywords: computational fluid dynamics, aerodynamic, aerospace, propulsion, trapped vortex combustor

Procedia PDF Downloads 77
580 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments

Authors: I. Berger, N. Machour, F. Portet-Koltalo

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited in soils or sediments. Even if persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), leading to oxygenated metabolites (oxy-PAHs) which can be more toxic than their parent PAH. Oxy-PAHs are measured less often than PAHs in sediments, and this study aims to compare different analytical tools for extracting and quantifying a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems – HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS) – were compared to separate and quantify oxy-PAHs. Microwave assisted extraction (MAE) was optimized to extract oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detectors. OH-PAHs were detected with the sensitive FLD, but the non-fluorescent quinones were detected with UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, even if GC-MS is not well adapted to the analysis of the thermodegradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and LODs were not very satisfactory for the four OH-PAHs either (0.18-0.6 mg/L). So two derivatization processes were optimized with respect to the literature: one for silylation of OH-PAHs and one for acetylation of quinones. Silylation using BSTFA/TCMS 99/1 was enhanced using a mixture of catalyst solvents (pyridine/ethyl acetate) and by finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the added amount of Zn (0.1-0.25 g), the nature of the derivatization reagent (acetic anhydride, heptafluorobutyric acid…) and the liquid/liquid extraction at the end of the process. After derivatization, LODs were decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all the quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave assisted extraction (MAE) followed by GC-MS analysis. Several solvent mixtures with different volumes (10-25 mL) and different extraction temperatures (80-120°C) were tested to obtain the best recovery yields. Satisfactory recoveries could be obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor which had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even if MAE-GC-MS was satisfactory for analyzing these oxy-PAHs, MAE optimization has to be carried on to obtain a more appropriate extraction solvent mixture, allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not necessitate a long prior derivatization step.

Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments

Procedia PDF Downloads 276
579 Modelling Social Influence and Cultural Variation in Global Low-Carbon Vehicle Transitions

Authors: Hazel Pettifor, Charlie Wilson, David Mccollum, Oreane Edelenbosch

Abstract:

Vehicle purchase is a technology adoption decision that will strongly influence future energy and emission outcomes. Global integrated assessment models (IAMs) provide valuable insights into the medium- and long-term effects of socio-economic development, technological change and climate policy. In this paper we present a unique and transparent approach for improving the behavioural representation of these models by incorporating social influence effects to more accurately represent consumer choice. This work draws together strong conceptual thinking and robust empirical evidence to introduce heterogeneous and interconnected consumers who vary in their aversion to new technologies. Focussing on vehicle choice, we conduct novel empirical research to parameterise consumer risk aversion and how this is shaped by social and cultural influences. We find robust evidence for social influence effects, and variation between countries as a function of cultural differences. We then formulate an approach to modelling social influence which is implementable in both simulation- and optimisation-type models. We use two global integrated assessment models (IMAGE and MESSAGE) to analyse four scenarios that introduce social influence and cultural differences between regions. These scenarios allow us to explore the interactions between consumer preferences and social influence. We find that incorporating social influence effects into global models accelerates the early deployment of electric vehicles and stimulates more widespread deployment across adopter groups. Incorporating cultural variation leads to significant differences in deployment between culturally divergent regions such as the USA and China. Our analysis significantly extends the ability of global integrated assessment models to provide policy-relevant analysis grounded in real-world processes.
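As a stylised illustration of how social influence can be represented in such models (not the IMAGE or MESSAGE implementation), the sketch below evolves EV adoption among heterogeneous consumers whose perceived technology-risk premium declines as the adopted share grows; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_consumers = 10_000
# Heterogeneous aversion to the new technology, expressed as a perceived cost premium
risk_premium = rng.lognormal(mean=0.0, sigma=0.5, size=n_consumers)

base_advantage = 0.4   # intrinsic attractiveness of the EV option (hypothetical units)
social_weight = 1.5    # strength of social influence on the perceived premium
adopted = np.zeros(n_consumers, dtype=bool)

for year in range(2025, 2041):
    share = adopted.mean()
    # Social influence: the effective premium shrinks as the adopted share grows
    damping = max(0.0, 1.0 - social_weight * share)
    adopted |= (risk_premium * damping) < base_advantage
    print(year, f"adoption share = {adopted.mean():.2%}")
```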

Keywords: behavioural realism, electric vehicles, social influence, vehicle choice

Procedia PDF Downloads 174
578 Nutrigenetic and Bioinformatic Analysis of Rice Bran Bioactives for the Treatment of Lifestyle Related Disease Diabetes and Hypertension

Authors: Md. Alauddin, Md. Ruhul Amin, Md. Omar Faruque, Muhammad Ali Siddiquee, Zakir Hossain Howlader, Mohammad Asaduzzaman

Abstract:

Diabetes and hypertension are major lifestyle-related diseases. α-amylase and angiotensin converting enzyme (ACE) are key enzymes that regulate diabetes and hypertension. The aim was to develop a drug for the treatment of diabetes and hypertension. The rice bran (RB) sample (Oryza sativa; BRRI-Dhan-84) was collected from the Bangladesh Rice Research Institute (BRRI), and rice bran proteins were isolated and hydrolyzed by the hydrolyzing enzymes alcalase and trypsin. In vivo experiments suggested that rice bran bioactives have an effect on the expression of several key gluconeogenesis- and lipogenesis-regulating genes, such as glucose-6-phosphatase, phosphoenolpyruvate carboxykinase, and fatty acid synthase. These genes are connected with the regulation of glucose levels and lipid profiles, as well as with anti-inflammatory activity. Molecular docking, bioinformatics and in vitro experiments were performed. We found that rice bran protein hydrolysates significantly (p<0.05) influence the peptide concentration in the case of trypsin, alcalase, and (trypsin + alcalase) digestion. The in vitro analysis found that the protein hydrolysate significantly (p<0.05) reduced diabetes and hypertension markers as well as oxidative stress. A molecular docking study showed that the YY and IP peptides have a significantly strong binding affinity to the active sites of the ACE enzyme and α-amylase, at -7.8 kcal/mol and -6.2 kcal/mol, respectively. Molecular dynamics (MD) simulation and SwissADME data analysis showed low toxicity risk, good physicochemical properties, pharmacokinetics, and drug-likeness, with drug scores of 0.45 and 0.55 for the YY and IP peptides, respectively. Thus, rice bran bioactives could be good candidates for the treatment of diabetes and hypertension.

Keywords: anti-hypertensive and anti-hyperglycemic, anti-oxidative, bioinformatics, in vitro study, rice bran proteins and peptides

Procedia PDF Downloads 49
577 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, it lacks psychological plausibility of the learning procedure: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance on determining entailment between the sentences translated by the translator into the classifier's native language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation were proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning based zero-shot machine translation.
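The sketch below illustrates only the training signal: a REINFORCE-style update for a toy stochastic "translator" whose reward comes from a stand-in classifier. The vocabulary, reward function and policy are hypothetical toys, not the paper's encoder-decoder model or NLI classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, sent_len, lr = 8, 4, 0.1
# Toy policy: independent categorical distribution over target tokens per position
logits = np.zeros((sent_len, vocab_size))

def classifier_reward(tokens):
    """Stand-in for the entailment classifier: rewards a fixed 'correct' output."""
    target = np.array([2, 5, 1, 7])
    return float(np.mean(tokens == target))

def sample_sentence(logits):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    tokens = np.array([rng.choice(vocab_size, p=p) for p in probs])
    return tokens, probs

baseline = 0.0
for step in range(2000):
    tokens, probs = sample_sentence(logits)
    reward = classifier_reward(tokens)
    baseline = 0.9 * baseline + 0.1 * reward            # moving-average baseline
    advantage = reward - baseline
    # REINFORCE: grad of log pi(token) w.r.t. logits = one_hot(token) - probs
    grad = -probs
    grad[np.arange(sent_len), tokens] += 1.0
    logits += lr * advantage * grad

print("greedy output:", logits.argmax(axis=1),
      "reward:", classifier_reward(logits.argmax(axis=1)))
```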

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 116
576 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment

Authors: Nabil Al-Zaqri

Abstract:

Semiconductor photocatalysts with surface defects display incredible light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity, as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by prolonging photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production showing a quantum efficiency of 1.125% and efficient organic degradation. Our promising findings demonstrated that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollution degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and small particle size were observed. The ultrasound treatment causes particle aggregation and surface defects leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where ¹O₂ was found to be the major active species. Hence, the ultrasound treatment influences efficient photocatalysts for superior hydrogen evolution and an excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures.

Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production

Procedia PDF Downloads 127
575 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
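As a minimal illustration of the sector-coupling idea (far simpler than the MILP used in the study), the sketch below schedules depot charging of a bus fleet against hourly day-ahead prices with a scipy linear program; the prices, fleet energy demand and charger limit are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly day-ahead prices (EUR/MWh) for an overnight charging window
prices = np.array([90, 70, 55, 42, 38, 40, 48, 65, 85, 110], dtype=float)
energy_needed_mwh = 6.0    # total energy the bus fleet must recharge overnight
max_power_mw = 1.2         # depot charger limit per hour

# Decision variables: energy charged in each hour (MWh); minimise total cost
res = linprog(
    c=prices,                                    # objective: sum(price_h * e_h)
    A_eq=[np.ones_like(prices)],                 # hourly energy must sum to the demand
    b_eq=[energy_needed_mwh],
    bounds=[(0.0, max_power_mw)] * len(prices),  # per-hour charger limit
    method="highs",
)
print("hourly schedule (MWh):", np.round(res.x, 2))
print("charging cost (EUR):", round(res.fun, 1))
```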

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 59
574 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718

Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou

Abstract:

Tool wear and tool geometry have significant effects on the residual stresses in the component produced by high-speed machining. In this paper, a Coupled Eulerian and Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 µm yields cutting forces and chip morphology in close agreement with the experimental data. Analyses of the thermal and mechanical loading are performed to study the effect of segmented chip morphology on the machined surface topography and residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within 100 µm depth of the machined surface increases drastically due to the greater frictional heat generated as the tool-workpiece contact area increases when a larger edge radius and flank wear are used. With further increasing depth, the temperature drops rapidly in all cases due to the low conductivity of Inconel718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. Besides, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The magnitude variations of the compressive residual stress caused by varying edge radius and flank wear show opposite trends, depending on the magnitudes of the ploughing and friction pressure acting on the machined surface.

Keywords: Coupled Eulerian Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718

Procedia PDF Downloads 136
573 Computational Fluid Dynamics (CFD) Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulates that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and the evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, acquired in the city of Hamburg, and its wind tunnel mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS. We have therefore developed Python codes to automatically convert building and topographic data to the FDS input file. To evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS results demonstrate a good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within the acceptable tolerances.
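
The acceptance metrics named above (FB, NMSE, FAC2) follow standard definitions from the atmospheric dispersion model evaluation literature; the short sketch below shows one way they could be computed from paired observed and predicted concentrations. The sample arrays are placeholders, not CUTE data.

```python
# Sketch of the standard validation metrics; sample values are made up.
import numpy as np

def fb(co, cp):
    # Fractional bias: 0 means no bias; positive values indicate under-prediction.
    return 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())

def nmse(co, cp):
    # Normalized mean square error: 0 for a perfect model.
    return np.mean((co - cp) ** 2) / (co.mean() * cp.mean())

def fac2(co, cp):
    # Fraction of predictions within a factor of two of the observations.
    ratio = cp / co
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

co = np.array([1.2, 0.8, 2.5, 0.4, 1.9])   # placeholder observed concentrations
cp = np.array([1.0, 1.1, 2.0, 0.7, 1.5])   # placeholder model predictions
print(f"FB={fb(co, cp):.3f}  NMSE={nmse(co, cp):.3f}  FAC2={fac2(co, cp):.2f}")
```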

Keywords: numerical simulations, atmospheric dispersion, COST ES1006 Action, CFD model, CUTE experiments, wind tunnel data, numerical results

Procedia PDF Downloads 123
572 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport

Authors: Marcel Schneider, Nils Nießen

Abstract:

Microscopic simulation programs enable the description of the two processes of railway operation and the preceding timetabling. Occupation conflicts are often solved based on defined train priorities on both process levels. These conflict resolutions produce knock-on delays for the involved trains. The sum of knock-on delays is commonly used to evaluate the quality of railway operations; it is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. However, it is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end-customers. These models evaluate the knock-on delays in more detail than linear monetary functions and take competing modes of transport into account. The new approach couples a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end-customers change over to other transport modes. The new approach first considers rail-bound and road transport, but it can also be extended to air transport. The split of the end-customers is described by the modal split. The reactions of the end-customers affect the revenues of the railway undertakings. Different travel purposes have different time reserves and tolerances towards delays. Longer journey times lead not only to revenue changes but also to additional costs. These costs depend either on time or on track and arise from the circulation of staff and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays. The contribution margin is calculated for different resolution decisions of the same conflict, and the conflict resolution is improved until the monetary loss is minimised. The iterative process therefore determines an optimum conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value of each dispatching decision can also be determined.
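
A purely illustrative sketch of the evaluation loop described above (not the authors' implementation) is given below: each candidate resolution of a conflict yields knock-on delays per train, a simple logit-style mode-choice term translates delay into the share of end-customers retained by rail, and the resolution with the highest contribution margin is selected. All revenues, costs, and sensitivities are hypothetical.

```python
# Hypothetical sketch: score candidate conflict resolutions by contribution
# margin instead of the plain sum of knock-on delays.
import math
from dataclasses import dataclass

@dataclass
class Resolution:
    name: str
    knock_on_delay_min: dict  # train id -> knock-on delay in minutes

BASE_REVENUE = {"RE1": 4000.0, "IC2": 9000.0}  # EUR per service, assumed
VARIABLE_COST_PER_MIN = 12.0                   # staff/vehicle circulation cost, assumed

def retained_share(delay_min: float) -> float:
    """Logit-style share of end-customers who stay with rail despite the delay."""
    u_rail = -0.08 * delay_min   # assumed delay sensitivity
    u_road = -1.5                # assumed constant utility of the road alternative
    return math.exp(u_rail) / (math.exp(u_rail) + math.exp(u_road))

def contribution_margin(res: Resolution) -> float:
    margin = 0.0
    for train, delay in res.knock_on_delay_min.items():
        revenue = BASE_REVENUE[train] * retained_share(delay)
        margin += revenue - VARIABLE_COST_PER_MIN * delay
    return margin

candidates = [
    Resolution("hold RE1, let IC2 pass", {"RE1": 6.0, "IC2": 0.0}),
    Resolution("hold IC2, let RE1 pass", {"RE1": 0.0, "IC2": 3.0}),
]
best = max(candidates, key=contribution_margin)
print(best.name, round(contribution_margin(best), 2))
```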

Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations

Procedia PDF Downloads 315
571 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool applied at multiple stages of a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years have been examined. BRA involves the review of two dimensions: the dimension of benefits (determined mainly by the therapeutic efficacy) and the dimension of risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials to the authorization procedure, post-marketing surveillance, and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders’ preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development. The use of formal, systematic structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
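
For illustration only (and not drawn from the reviewed literature), a minimal decision-analytic model of the weighted-sum multi-criteria type could combine normalized benefit and risk criteria into a single net score, as sketched below. The criteria, weights, and values are entirely hypothetical.

```python
# Hypothetical weighted-sum sketch of a quantitative benefit-risk score.
# Criteria, weights and 0-1 normalized values are invented for illustration.
criteria = {
    # name: (weight, normalized value, is_benefit)
    "response_rate":         (0.40, 0.70, True),
    "symptom_relief":        (0.20, 0.55, True),
    "serious_adverse_event": (0.25, 0.30, False),
    "discontinuation_rate":  (0.15, 0.20, False),
}

def benefit_risk_score(crit: dict) -> float:
    score = 0.0
    for weight, value, is_benefit in crit.values():
        # Benefits add to the net score; risks subtract from it.
        score += weight * value if is_benefit else -weight * value
    return score

print(f"net benefit-risk score: {benefit_risk_score(criteria):+.3f}")
```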

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 142
570 Practical Skill Education for Doctors in Training: Economical and Efficient Methods for Students to Receive Hands-on Experience

Authors: Nathaniel Deboever, Malcolm Breeze, Adrian Sheen

Abstract:

Basic surgical and suturing techniques are a fundamental requirement for all doctors. In order to gain confidence and competence, doctors in training need sufficient teaching and, just as importantly, practice. Young doctors with an adequate level of expertise in these simple surgical skills, which are often used in the Emergency Department, can help alleviate some pressure during a busy evening. Unfortunately, learning these skills can be quite difficult during medical school or even during the junior doctor years. The aim of this project was to adequately train medical students attending the University of Sydney’s Nepean Clinical School through a series of workshops highlighting practical skills, with hopes to further extend this program to junior doctors in the hospital. The sessions taught basic skills via tutorials and demonstrations and then cemented these proficiencies with hands-on practice. In such an endeavor, it is fundamental to employ models that appropriately resemble what students will encounter in the clinical setting. The sustainability of the workshops is similarly important to the continuity of such a program. To address both these challenges, the authors have developed models including suturing platforms, knot-tying and vessel ligation stations, shave and punch biopsy models, and an ophthalmologic foreign-body device. The unique aspect of this work is that hands-on teaching sessions were used to address a gap in the doctor-in-training and junior doctor curriculum. Presented through this poster are our approaches to creating models that do not employ animal products and therefore do not require special facilities or disposal procedures. Covering numerous skills that would be beneficial to all young doctors, these models are easily replicable and affordable. This work allows for countless sessions at low cost, providing enough practice for students to perform these skills confidently, as shown through attendee questionnaires.

Keywords: medical education, surgical models, surgical simulation, surgical skills education

Procedia PDF Downloads 138
569 Bimetallic MOFs Based Membrane for the Removal of Heavy Metal Ions from the Industrial Wastewater

Authors: Muhammad Umar Mushtaq, Muhammad Bilal Khan Niazi, Nouman Ahmad, Dooa Arif

Abstract:

Apart from organic dyes, heavy metals such as Pb, Ni, Cr, and Cu are present in textile effluent and pose a threat to humans and the environment. Many studies on removing heavy metal ions from textile wastewater have been conducted in recent decades using metal-organic frameworks (MOFs). In this study, a new polyether sulfone ultrafiltration membrane, modified with Cu/Co- and Cu/Zn-based bimetallic metal-organic frameworks (MOFs), was produced. Phase inversion was used to produce the membrane, and atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize it. The structure of the bimetallic MOF-based membrane is complex and can be comprehended using these characterization techniques. The bimetallic MOF-based filtration membranes are designed to selectively adsorb specific contaminants while allowing the passage of water molecules, improving the ultrafiltration efficiency. The adsorption capacity and selectivity of MOFs are enhanced by functionalizing them with particular chemical groups or incorporating them into composite membranes with other materials, such as polymers. The morphology and performance of the bimetallic MOF-based membrane were investigated in terms of pure water flux and metal ion rejection. A key advantage of the developed bimetallic MOF-based membranes for wastewater treatment is the presence of two metals in their structure, which provides additional binding sites for contaminants, leading to a higher adsorption capacity and more efficient removal of pollutants from wastewater. Based on the experimental findings, bimetallic MOF-based membranes are more capable of rejecting metal ions from industrial wastewater than previously developed conventional membranes. Furthermore, the operational parameters, including pressure gradients and velocity profiles, are simulated using Ansys Fluent software. The simulation results obtained for the operating parameters are in complete agreement with the experimental results.
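
For context, pure water flux and metal ion rejection are conventionally computed as J = V/(A·t) and R = (1 - Cp/Cf) × 100%. The snippet below is a generic sketch of these textbook formulas with made-up sample values; it is not code or data from this study.

```python
# Generic sketch of standard membrane performance formulas; sample numbers
# are illustrative only and do not come from this study.

def water_flux(permeate_volume_l: float, area_m2: float, time_h: float) -> float:
    """Pure water flux J = V / (A * t), in L m^-2 h^-1 (LMH)."""
    return permeate_volume_l / (area_m2 * time_h)

def rejection(feed_conc_mg_l: float, permeate_conc_mg_l: float) -> float:
    """Metal ion rejection R = (1 - Cp/Cf) * 100, in percent."""
    return (1.0 - permeate_conc_mg_l / feed_conc_mg_l) * 100.0

# Hypothetical example: 0.9 L of permeate through 0.002 m^2 in 1 h;
# a Pb feed of 50 mg/L reduced to 4 mg/L in the permeate.
print(f"flux      = {water_flux(0.9, 0.002, 1.0):.1f} LMH")   # 450.0 LMH
print(f"rejection = {rejection(50.0, 4.0):.1f} %")            # 92.0 %
```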

Keywords: bimetallic MOFs, heavy metal ions, industrial wastewater treatment, ultrafiltration

Procedia PDF Downloads 78
568 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the ‘BIM based Energy Management Support System’ (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments’ facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) obtain a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current space equipment setting to indicate an anomaly, such as when appliances are turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
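
A hedged sketch of the anomaly check described in (c) is shown below: it flags intervals in which metered consumption exceeds the simulated standard profile by a chosen margin, or in which equipment draws power while the room is unoccupied. The thresholds and data layout are assumptions for illustration, not the system's actual implementation.

```python
# Illustrative anomaly check, not the actual BIM-EMSS implementation.
# Each record: (hour, occupied, actual_kwh, standard_kwh) for one room.
records = [
    (9,  True,  1.8, 1.6),
    (13, True,  2.9, 1.7),   # actual consumption well above the simulated standard
    (22, False, 1.2, 0.1),   # equipment running in an unoccupied room
]

TOLERANCE = 1.25   # assumed: flag if actual exceeds 125 % of the standard profile
IDLE_LIMIT = 0.2   # assumed kWh baseline allowed for an unoccupied room

def anomalies(recs):
    flagged = []
    for hour, occupied, actual, standard in recs:
        if not occupied and actual > IDLE_LIMIT:
            flagged.append((hour, "consumption without occupancy"))
        elif standard > 0 and actual > TOLERANCE * standard:
            flagged.append((hour, "actual exceeds standard profile"))
    return flagged

for hour, reason in anomalies(records):
    print(f"{hour:02d}:00  ->  {reason}")
```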

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 293
567 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional intersection management models, such as unsignalized (no-light) or signalized intersections, are not the most effective way of handling intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated. We extended their work and added a potential-based lane organization layer. In order to distribute vehicles evenly across the lanes, this layer triggers vehicles to analyze nearby lanes and to change lanes if another lane offers an advantage. This behavior can be observed in real life, where drivers change lanes based on intuition. The basic intuition for selecting the correct lane is to choose a less crowded lane in order to reduce delay. We model that behavior without any change in the AIM workflow. Experimental results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management. Our study draws attention to this parameter and suggests a solution for it. We observed that regulating the inputs to AIM, namely the vehicles in the lanes, contributes effectively to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% in the 4-lane and 6-lane scenarios.
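
The lane organization idea can be illustrated with a deliberately simple potential function. The sketch below is not the paper's actual implementation: it scores the current and adjacent lanes by queue length plus an assumed lane-change penalty, and the vehicle moves to the lane with the lowest potential.

```python
# Illustrative potential-based lane choice, not the PLO-AIM implementation.
# A vehicle scores its own lane and the adjacent lanes; the lane with the
# lowest potential (short queue, small change penalty) is chosen.
from typing import List

CHANGE_PENALTY = 1.5  # assumed cost (in queued vehicles) of leaving the current lane

def lane_potential(queues: List[int], lane: int, current: int) -> float:
    penalty = 0.0 if lane == current else CHANGE_PENALTY
    return queues[lane] + penalty

def choose_lane(queues: List[int], current: int) -> int:
    candidates = [l for l in (current - 1, current, current + 1)
                  if 0 <= l < len(queues)]
    return min(candidates, key=lambda l: lane_potential(queues, l, current))

queues = [7, 3, 9, 4]                  # queued vehicles per lane (hypothetical)
print(choose_lane(queues, current=2))  # -> 1, the emptier neighbouring lane
```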

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 125