Search results for: traffic measurement and modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7523


6533 Validation of Visibility Data from Road Weather Information Systems by Comparing Three Data Resources: Case Study in Ohio

Authors: Fan Ye

Abstract:

Adverse weather conditions, particularly those with low visibility, are critical to driving tasks. However, the direct relationship between visibility distance and traffic flow/roadway safety is uncertain due to the limited availability of visibility data. The recent growth in deployment of Road Weather Information Systems (RWIS) makes segment-specific visibility information available, which can be integrated with other Intelligent Transportation Systems, such as automated warning systems and variable speed limits, to improve mobility and safety. Before applying RWIS visibility measurements in traffic studies and operations, it is critical to validate the data. Therefore, an attempt was made in this paper to examine the validity and viability of RWIS visibility data by comparing visibility measurements among RWIS, airport weather stations, and weather information recorded by police in crash reports, based on Ohio data. The results indicated that RWIS visibility measurements were significantly different from airport visibility data in Ohio, but no conclusion regarding the reliability of RWIS visibility could be drawn, since none of the compared sources provides a verified ground truth. It is suggested that more objective methods are needed to validate RWIS visibility measurements, such as continuous in-field measurements during various weather events using calibrated visibility sensors.
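
A hedged sketch of the kind of paired comparison described above; the matched observations, bias, and sample size are invented for illustration and this is not the authors' actual procedure or data:

```python
# Hypothetical paired comparison of matched visibility observations (miles)
# from an RWIS station and a nearby airport station. Values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
airport = rng.uniform(0.25, 10.0, size=200)          # airport station visibility
rwis = airport * 0.8 + rng.normal(0, 0.5, size=200)  # RWIS reading, assumed biased

# Paired nonparametric test: do the two sources differ systematically?
stat, p = stats.wilcoxon(rwis, airport)
print(f"Wilcoxon statistic={stat:.1f}, p-value={p:.3g}")
if p < 0.05:
    print("RWIS and airport visibility differ significantly (as found in Ohio).")
```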

Keywords: RWIS, visibility distance, low visibility, adverse weather

Procedia PDF Downloads 252
6532 Modeling User Context Using CEAR Diagram

Authors: Ravindra Dastikop, G. S. Thyagaraju, U. P. Kulkarni

Abstract:

Even though the number of context-aware applications is increasing day by day, along with their users, there is still no generic programming paradigm for context-aware applications. This situation could be remedied by designing and developing an appropriate context modeling and programming paradigm for context-aware applications. In this paper, we propose a static context model and metrics for validating the expressiveness and understandability of the model. The proposed context modeling is a way of describing the situation of a user using context entities, attributes, and relationships. The model, an extended and hybrid version of the ER model, ontology model, and graphical model, is specifically meant for expressing and understanding the user situation in a context-aware environment. The model is useful for understanding context-aware problems, preparing documentation, and designing programs and databases. The model makes use of a context entity attributes relationship (CEAR) diagram to represent the associations between context entities and attributes. We have identified a new set of graphical notations for improving the expressiveness and understandability of context from the end-user perspective.
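
A minimal sketch of how a CEAR-style situation description might be encoded; the entity names, attributes, and relations are hypothetical, and this is our reading of the idea rather than the authors' notation:

```python
# A user situation described by context entities, their attributes, and
# named relationships between entities (all names are illustrative).
from dataclasses import dataclass, field

@dataclass
class ContextEntity:
    name: str                       # e.g., "User", "Room", "Phone"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: ContextEntity
    relation: str                   # e.g., "located_in", "carries"
    target: ContextEntity

# Situation: a user in a meeting room, carrying a silenced phone.
user = ContextEntity("User", {"activity": "meeting"})
room = ContextEntity("Room", {"id": "B-204", "noise_level": "low"})
phone = ContextEntity("Phone", {"mode": "silent"})
situation = [Relationship(user, "located_in", room),
             Relationship(user, "carries", phone)]

for r in situation:
    print(f"{r.source.name} --{r.relation}--> {r.target.name}")
```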

Keywords: user context, context entity, context entity attributes, situation, sensors, devices, relationships, actors, expressiveness, understandability

Procedia PDF Downloads 345
6531 Modeling in the Middle School: Eighth-Grade Students’ Construction of the Summer Job Problem

Authors: Neslihan Sahin Celik, Ali Eraslan

Abstract:

Mathematical models and modeling are among the topics that have been intensively discussed in recent years. In line with the results of the PISA studies, researchers in many countries have begun to question how well students in the school system are prepared to solve the real-world problems they will encounter in their future professional lives. As a result, many mathematics educators have begun to emphasize the importance of new skills and understandings, such as constructing, hypothesizing, describing, manipulating, predicting, and working together on complex and multifaceted problems, for success beyond school. As students increasingly face such situations in their daily lives, it is important to ensure that they have enough experience working together and interpreting mathematical situations that enable them to think in different ways and share their ideas with their peers. Model-eliciting activities are thus one of the main tools that help students gain such experience and the new skills required. This research study was carried out in the town center of a big city located in the Black Sea region of Turkey. The participants were eighth-grade students in a middle school. After a six-week preliminary study, three students in an eighth-grade classroom were selected using the criterion sampling technique and placed in a focus group. The focus group was videotaped as the students worked on a model-eliciting activity, the Summer Job Problem. The conversation of the group was transcribed, examined together with the students' written work, and then qualitatively analyzed through the lens of Blum's (1996) modeling process cycle. The results showed that eighth-grade students can successfully work on a model-eliciting activity, develop a model based on two parameters, and review the whole process. On the other hand, they had difficulty relating the parameters to each other and taking all parameters into account when establishing the model.

Keywords: middle school, modeling, mathematical modeling, summer job problem

Procedia PDF Downloads 339
6530 Comparison between Post- and Oxy-Combustion Systems in a Petroleum Refinery Unit Using Modeling and Optimization

Authors: Farooq A. Al-Sheikh, Ali Elkamel, William A. Anderson

Abstract:

A fluidized catalytic cracking unit (FCCU) is one of the key units in many refineries. Modeling and optimization of the FCCU have been addressed by many researchers in past decades, but in this research, a comparison between post- and oxy-combustion in the FCCU regenerator was studied. A simplified mathematical model was derived by performing mass/heat balances around both the reactor and the regenerator. A state-space analysis was employed to show the effects of flow rate variables, such as air, feed, spent catalyst, regenerated catalyst, and flue gas, on the output variables. The main aim of studying the dynamic responses is to identify the variables that most influence the reactor and regenerator temperatures, and to find the upper/lower limits of those variables that keep the reactor and regenerator temperatures within normal operating conditions. These limits are then used as side constraints in the optimization technique to find appropriate operating regimes. The objective functions were formulated to maximize the energy in the reactor while minimizing the energy consumption in the regenerator. In conclusion, an oxy-combustion process can be used instead of a post-combustion one.
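
To make the state-space step concrete, here is a minimal simulation sketch; the matrices, states, and inputs are hypothetical placeholders, not the paper's fitted balances:

```python
# Illustrative linear state-space model x' = Ax + Bu with deviation states
# x = [reactor T, regenerator T] and inputs u = [feed rate, air rate].
import numpy as np

A = np.array([[-0.05,  0.02],
              [ 0.01, -0.04]])   # coupling between reactor/regenerator temps
B = np.array([[0.8, 0.0],
              [0.0, 1.2]])       # effect of feed and air flow rates

def simulate(x0, u, dt=1.0, steps=200):
    """Forward-Euler response of the state-space model to a constant input."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (A @ x + B @ np.asarray(u, dtype=float))
        traj.append(x.copy())
    return np.array(traj)

# Step response to a small increase in air flow rate (deviation variables);
# the steady-state deviation is the kind of limit fed into the optimization.
traj = simulate(x0=[0.0, 0.0], u=[0.0, 0.5])
print("steady-state temperature deviations:", np.round(traj[-1], 2))
```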

Keywords: FCCU modeling, optimization, oxy-combustion, post-combustion

Procedia PDF Downloads 211
6529 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

Authors: Nadezhda Kvatashidze

Abstract:

The International Accounting Standards Board updated the conceptual framework for financial reporting. The main reason behind this is to resolve accounting tasks arising from market development and from business transactions with new economic content. Investors also call for greater transparency of information and responsibility for results in order to make more accurate risk assessments and forecasts. All this makes it necessary to develop the conceptual framework for financial reporting further so that users receive useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and refined. The criteria for recognition of certain reporting elements (assets and liabilities) also had to be updated. All of this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of the concepts underlying preparation of the financial statement. The main objective of the revision is to improve financial reporting and to develop a clear package of concepts. This will support the International Accounting Standards Board (IASB) in setting a common 'approach and reflection' for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events that arise from particular deals to which no standard applies, or where a standard allows a choice of accounting policy.

Keywords: conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship

Procedia PDF Downloads 129
6528 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization

Authors: Taha Benarbia

Abstract:

The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize overall network performance. The first aspect addressed is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions that optimize network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.

Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network

Procedia PDF Downloads 82
6527 Geospatial Modeling of Dry Snow Avalanches Distribution Using Geographic Information Systems and Remote Sensing: A Case Study of the Šar Mountains (Balkan Peninsula)

Authors: Uroš Durlević, Ivan Novković, Nina Čegar, Stefanija Stojković

Abstract:

Snow avalanches are among the most dangerous natural phenomena in mountain regions worldwide. Material damage and human casualties caused by snow avalanches can be very significant. In this study, the natural conditions of the Šar Mountains were analyzed using geographic information systems and remote sensing for geospatial modeling of dry slab avalanches. For this purpose, the Fuzzy Analytic Hierarchy Process (FAHP) multi-criteria analysis method was used, within which fifteen environmental criteria were analyzed and evaluated. The results show that a significant area of the Šar Mountains is very highly susceptible to the occurrence of dry slab avalanches. The obtained data can be of significant use to local governments, emergency services, and other institutions that deal with natural disasters at the local level. To the best of our knowledge, this is one of the first studies in the Republic of Serbia to use the FAHP method for geospatial modeling of dry slab avalanches.
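
For readers unfamiliar with FAHP, the sketch below derives criterion weights from a triangular fuzzy pairwise-comparison matrix using Buckley's geometric-mean variant; the three criteria, the judgments, and the choice of variant are illustrative assumptions, not the paper's fifteen-criterion setup:

```python
# FAHP criterion weighting sketch: fuzzy geometric means, centroid
# defuzzification, normalization. Criteria are invented examples.
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u); row i vs column j
# for criteria [slope, land cover, snow depth].
M = [[(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
     [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
     [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)]]

n = len(M)
# Fuzzy geometric mean of each row, component-wise over (l, m, u).
g = [tuple(np.prod([M[i][j][k] for j in range(n)]) ** (1 / n) for k in range(3))
     for i in range(n)]
total = [sum(gi[k] for gi in g) for k in range(3)]
# Fuzzy weight g_i / total (l/U, m/M, u/L), defuzzified by the centroid.
crisp = [(gi[0] / total[2] + gi[1] / total[1] + gi[2] / total[0]) / 3 for gi in g]
w = np.array(crisp) / sum(crisp)
print("criterion weights:", np.round(w, 3))
```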

Keywords: GIS, FAHP, Šar Mountains, snow avalanches, environmental protection

Procedia PDF Downloads 92
6526 Hydrological Modeling and Climate Change Impact Assessment Using HBV Model, A Case Study of Karnali River Basin of Nepal

Authors: Sagar Shiwakoti, Narendra Man Shakya

Abstract:

The lumped conceptual hydrological model HBV is applied to the Karnali River Basin to estimate runoff at several gauging stations and to analyze the changes in catchment hydrology and future flood magnitude due to climate change. The performance of the model is analyzed to assess its suitability for simulating streamflow in snow-fed mountainous catchments. Due to the structural complexity, the model shows difficulties in modeling low and high flows accurately at the same time: low flows were generally underestimated, and peaks were correctly estimated except for some sharp peaks caused by isolated precipitation events. In this study, an attempt has been made to evaluate the importance of snowmelt discharge in the runoff regime of the basin. The contribution of snowmelt to annual, summer, and winter runoff has been quantified. The contribution is highest at the beginning of the hot months, when the accumulated snow begins to melt. Examination of this contribution under increased temperatures indicates that global warming, by raising the average basin temperature, will lead to significantly higher snowmelt contributions to runoff. Forcing the model with the output of the HadCM3 GCM under the A1B scenario, downscaled to the station level, shows significant changes to catchment hydrology in the 2040s. The increase in runoff is most extreme in June and July, and a shift in the hydrological regime is also observed.
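
As a toy illustration of the snow routine at the heart of such models, the sketch below implements a degree-day accumulation/melt step; the parameter values and temperature series are illustrative, not calibrated Karnali values:

```python
# Stripped-down degree-day snow routine in the spirit of HBV's snow module.
TT, CFMAX = 0.0, 3.5   # threshold temp (degC), degree-day factor (mm/degC/day)

def snow_step(snowpack, precip, temp):
    """One daily step: accumulate snow below TT, melt above it."""
    if temp <= TT:
        return snowpack + precip, 0.0            # precipitation falls as snow
    melt = min(snowpack, CFMAX * (temp - TT))    # melt limited by storage
    return snowpack - melt, precip + melt        # rain plus melt -> runoff input

snowpack = 150.0                                 # initial snowpack, mm
for temp, precip in [(-5, 10), (-1, 4), (2, 0), (6, 2), (9, 0)]:
    snowpack, water = snow_step(snowpack, precip, temp)
    print(f"T={temp:>3} degC  snowpack={snowpack:6.1f} mm  water input={water:5.1f} mm")
```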

Keywords: hydrological modeling, HBV light, rainfall runoff modeling, snow melt, climate change

Procedia PDF Downloads 542
6523 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing the two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another, or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously; thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) is turned into a number, and whether, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In that field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted into one another, in the same way as we convert meters to feet. In our work, we therefore analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both using mathematical modeling and by validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change a model's dynamics up to the qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other; this transformation is exact, and it holds for every result. Lastly, we test the case of real-world data (i.e., finite precision), using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we argue that scale transformation should be considered a third degree of freedom for opinion dynamics. Its properties have a strong impact both on theoretical models and on their application to real-world data.
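
A toy demonstration of the scale effect, under our own construction (Deffuant-style bounded confidence, an arbitrary monotone transformation x -> x^2, and illustrative parameters), not the authors' models; the cluster counts may differ between the raw and rescaled runs:

```python
# Same bounded-confidence dynamics, run on raw and on monotonically
# rescaled opinions; the final clustering can depend on the scale chosen.
import numpy as np

def deffuant(x, eps=0.2, mu=0.5, steps=40000, seed=1):
    x = x.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:          # interact only if opinions are close
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):
    """Crude cluster count: distinct bins of width tol."""
    return len(np.unique(np.round(np.sort(x) / tol)))

rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, 300)
print("clusters on raw scale:        ", n_clusters(deffuant(raw)))
print("clusters on transformed scale:", n_clusters(deffuant(raw**2)))
```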

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 155
6524 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, the design provides an easy way to allocate the type I error rate, making the chart easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
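
A sketch of the chart's core mechanism using scikit-learn's one-class SVM as the classification engine; the adaptive tuning described in the paper is omitted, and the gamma-distributed (non-normal) process data are invented:

```python
# Phase I: train on in-control data; Phase II: signal when new samples
# fall outside the learned in-control region.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train = rng.gamma(shape=2.0, scale=1.0, size=(500, 3))   # skewed process data
clf = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(train)
# nu roughly bounds the in-control false-alarm (type I error) fraction.

new = rng.gamma(2.0, 1.0, size=(20, 3))
new[10:] += 1.5                              # inject a mean shift at sample 10
for t, flag in enumerate(clf.predict(new)):  # -1 means out-of-control signal
    if flag == -1:
        print(f"sample {t}: out-of-control signal")
```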

Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data

Procedia PDF Downloads 423
6523 Handwriting Velocity Modeling by Artificial Neural Networks

Authors: Mohamed Aymen Slim, Afef Abdelkrim, Mohamed Benrejeb

Abstract:

Handwriting is a physical demonstration of a complex cognitive process that people learn from childhood. People with disabilities or various neurological diseases face many difficulties at the writing stage, resulting from problems in the muscle stimuli (EMG) or in signals from the brain (EEG). The handwriting velocity of the same writer, or of different writers, varies according to different criteria: age, attitude, mood, writing surface, etc. It is therefore interesting to build an experimental database of recordings taking, as the primary reference, the writing speed of different writers, which would allow studying the global system during the handwriting process. This paper deals with a new approach to modeling the handwriting system based on the velocity criterion, using artificial neural networks, specifically Radial Basis Function (RBF) neural networks. The simulation results show satisfactory agreement between the responses of the developed neural model and the experimental data for various letters and forms, demonstrating the efficiency of the proposed approach.
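
A minimal RBF-regression sketch of the velocity-modeling idea, using numpy only; the bell-shaped velocity profile, the centers, and the kernel width are illustrative assumptions, not the paper's experimental data:

```python
# Fit a smooth stroke-velocity profile v(t) from noisy samples with a
# Gaussian-RBF network whose output weights are solved by least squares.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
v = np.exp(-((t - 0.5) / 0.15) ** 2) + 0.02 * rng.normal(size=t.size)

centers = np.linspace(0, 1, 12)   # RBF centers along the stroke
width = 0.08                      # shared kernel width

def design(t):
    """Matrix of RBF activations, one column per center."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

W, *_ = np.linalg.lstsq(design(t), v, rcond=None)   # linear output weights
v_hat = design(t) @ W
print("RMSE:", np.sqrt(np.mean((v - v_hat) ** 2)))
```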

Keywords: Electro Myo Graphic (EMG) signals, experimental approach, handwriting process, Radial Basis Functions (RBF) neural networks, velocity modeling

Procedia PDF Downloads 441
6522 Time Dependent Biodistribution Modeling of 177Lu-DOTATOC Using Compartmental Analysis

Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri

Abstract:

In this study, 177Lu-DOTATOC was prepared under optimized conditions (radiochemical purity: > 99%, radionuclidic purity: > 99%). The percentage of injected dose per gram (%ID/g) was calculated for organs up to 168 h post injection. A compartmental model was applied to mathematically describe the drug's behaviour in tissue over time. The biodistribution data showed significant excretion of the radioactivity through the kidneys. The adrenals and pancreas, as major expression sites for the somatostatin receptor (SSTR), had significant uptake. A pharmacokinetic model of 177Lu-DOTATOC obtained by compartmental analysis is presented, which describes the behavior of the complex.
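
An illustrative two-compartment sketch of the kind of kinetics such an analysis fits; the exchange and excretion rate constants and the initial activity are hypothetical, not the paper's fitted values (only the 177Lu half-life is physical):

```python
# Blood/tissue two-compartment model with physical decay of 177Lu.
import numpy as np
from scipy.integrate import odeint

lam = np.log(2) / 159.5          # 177Lu decay constant, 1/h (T1/2 ~ 6.65 d)
k12, k21, k10 = 0.8, 0.05, 0.12  # blood<->tissue exchange, excretion (1/h)

def model(y, t):
    c1, c2 = y                   # c1: blood, c2: receptor-rich tissue
    dc1 = -(k12 + k10 + lam) * c1 + k21 * c2
    dc2 = k12 * c1 - (k21 + lam) * c2
    return [dc1, dc2]

t = np.linspace(0, 168, 200)     # up to 168 h post injection
c = odeint(model, [100.0, 0.0], t)
print("blood activity at 24 h: ", round(float(np.interp(24, t, c[:, 0])), 2))
print("tissue activity at 24 h:", round(float(np.interp(24, t, c[:, 1])), 2))
```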

Keywords: biodistribution, compartmental modeling, ¹⁷⁷Lu, Octreotide

Procedia PDF Downloads 221
6521 Turbulence Modeling of Source and Sink Flows

Authors: Israt Jahan Eshita

Abstract:

Flows developed between two parallel disks have many engineering applications. Two types of non-swirling flow can be generated in such a domain: purely source flow in a disc-type domain (outward flow) and purely sink flow in a disc-type domain (inward flow). This situation often appears in turbomachinery components such as air bearings, heat exchangers, radial diffusers, vortex gyroscopes, disc valves, and viscosity meters. The main goal of this paper is to demonstrate mesh convergence, because mesh convergence saves time, is economical to run, and increases the efficiency of modeling for both sink and source flows. The flow field is then resolved using a very fine near-wall mesh with enhanced wall treatment. The flows are compared using the standard k-epsilon and RNG k-epsilon turbulence models. Lastly, experimental data are compared with the numerical solution for sink flow; the good agreement of the numerical solution with the experimental work validates the current modeling.
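
A short sketch of the mesh-convergence check itself, via Richardson extrapolation and the grid convergence index (GCI); the three mesh solutions and the refinement ratio are invented sample values:

```python
# Richardson extrapolation and GCI from three systematically refined meshes;
# f could be any monitored quantity, e.g., wall shear or pressure drop.
import math

f3, f2, f1 = 2.120, 2.068, 2.050   # coarse -> medium -> fine solutions
r = 2.0                             # constant refinement ratio

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
f_exact = f1 + (f1 - f2) / (r ** p - 1)             # Richardson estimate
gci_fine = 1.25 * abs((f2 - f1) / f1) / (r ** p - 1)  # Fs = 1.25 safety factor

print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}")
print(f"GCI (fine mesh) = {100 * gci_fine:.2f}%")
```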

Keywords: hydraulic diameter, k-epsilon model, meshes convergence, Reynolds number, RNG model, sink flow, source flow, wall y+

Procedia PDF Downloads 539
6520 An Analysis of OpenSim Graphical User Interface Effectiveness

Authors: Sina Saadati

Abstract:

OpenSim is a well-known software package in biomechanical studies. Worthy algorithms developed in this program are used for the modeling and simulation of human motions. In this research, we analyze the OpenSim application from the computer science perspective. It is important that every application have a user-friendly interface: an effective user interface can decrease the time, cost, and energy needed to learn how to use a program. In this paper, we survey the user interface of OpenSim as an important factor of the software. Finally, we infer that there are many challenges to be addressed in the development of OpenSim.

Keywords: biomechanics, computer engineering, graphical user interface, modeling and simulation, interface effectiveness

Procedia PDF Downloads 96
6519 Evaluation and Proposal for Improvement of the Flow Measurement Equipment in the Bellavista Drinking Water System of the City of Azogues

Authors: David Quevedo, Diana Coronel

Abstract:

This article evaluates the drinking water system in the Bellavista sector of the city of Azogues, with the purpose of determining the appropriate equipment to record the actual consumption flows of the inhabitants of the sector. Because the study area is rural and economically disadvantaged, there is an urgent need to establish a control system for drinking water consumption in order to conserve and manage this vital resource in the best possible way, considering that the water source supplying the sector is approximately 9 km away. The research began with the collection of cartographic, demographic, and statistical data for the sector, determining the coverage area, the population projection, and an endowment that guarantees a drinking water supply meeting the needs of the sector's inhabitants. Using hydraulic modeling in EPANET 2.0, the United States Environmental Protection Agency's application for modeling drinking water distribution systems, theoretical hydraulic data were obtained and used to design and justify the most suitable measuring equipment for the Bellavista drinking water system. Considering a minimum service life of 30 years for the drinking water system, future flow rates were calculated for the design of the macro-measuring device. The analysis of the network showed that the Bellavista sector has an average consumption of 102.87 liters per person per day; however, since Ecuadorian regulations recommend an endowment of 180 liters per person per day for the geographical conditions of the sector, this value was used for the analysis. From all the collected and calculated information, it was concluded that the Bellavista drinking water system needs a 125 mm electromagnetic macro-measuring device for the first three quinquenniums (five-year periods) of its service life and a 150 mm diameter device for the following three. The importance of having equipment that provides real and reliable data is that it allows control of water consumption by the population of the sector, measured through micro-measuring devices installed at the entrance of each household; their readings should match those of the macro-measuring device placed after the storage tank outlet, in order to control losses that may occur due to leaks in the drinking water system or illegal connections.
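
A worked sketch of the sizing arithmetic described; the current population, growth rate, and peak factor are invented, while the 180 L/person/day endowment comes from the abstract:

```python
# Project the population, apply the endowment, and compute the design flow
# that the macro-measuring device must register accurately.
P0, growth, years = 1200, 0.015, 30        # population, annual rate, horizon
endowment = 180.0                           # L/person/day (Ecuadorian guideline)
peak_factor = 2.2                           # max hourly / average daily ratio

P_future = P0 * (1 + growth) ** years       # geometric population projection
q_avg = P_future * endowment / 86400.0      # average flow, L/s
q_max = peak_factor * q_avg                 # design (peak) flow, L/s

print(f"projected population: {P_future:.0f}")
print(f"average demand: {q_avg:.2f} L/s, peak demand: {q_max:.2f} L/s")
# The meter diameter (e.g., 125 mm now, 150 mm later) is then chosen so that
# q_max falls within the electromagnetic meter's accurate measuring range.
```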

Keywords: macrometer, hydraulics, endowment, water

Procedia PDF Downloads 74
6518 Internet Optimization by Negotiating Traffic Times

Authors: Carlos Gonzalez

Abstract:

This paper describes a system to optimize the use of the internet by clients requiring video downloads at peak hours. The system consists of a web server belonging to a video content provider, an internet communications provider, and a software application running on a client's computer. The client, using the application software, communicates to the video provider a list of the client's future video demands. The video provider calculates which videos are going to be most in demand for download in the immediate future and proceeds to request from the internet provider the optimal hours for the downloading. The download times are sent to the application software, which uses the pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user's hard disk, which is accessed only by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos already stored in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize video download traffic is negotiation between the internet communication provider and the video content provider.
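
A toy sketch of what the negotiated schedule might look like on the provider side: demanded videos packed greedily into off-peak slots. All names, sizes, and capacities are hypothetical, and this is not the paper's actual protocol:

```python
# Greedy assignment of the most-demanded videos into the off-peak hour
# slots agreed between the video provider and the ISP.
demands = {"video_A": 340, "video_B": 120, "video_C": 95}   # requests/day
sizes_gb = {"video_A": 2.0, "video_B": 1.5, "video_C": 3.0}
offpeak_slots = {"02:00": 6.0, "03:00": 6.0, "04:00": 6.0}  # GB of spare capacity

schedule = {slot: [] for slot in offpeak_slots}
for video in sorted(demands, key=demands.get, reverse=True):
    for slot, free in offpeak_slots.items():
        if sizes_gb[video] <= free:                 # video fits in this slot
            schedule[slot].append(video)
            offpeak_slots[slot] = free - sizes_gb[video]
            break

for slot, videos in schedule.items():
    print(slot, "->", videos or "(idle)")
```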

Keywords: internet optimization, video download, future demands, secure storage

Procedia PDF Downloads 137
6517 Atmospheric Dispersion Modeling for a Hypothetical Accidental Release from the 3 MW TRIGA Research Reactor of Bangladesh

Authors: G. R. Khan, Sadia Mahjabin, A. S. Mollah, M. R. Mawla

Abstract:

Atmospheric dispersion modeling is important for any nuclear facility in the country in order to predict the impact of radiological doses on the environment as well as on human health. To ensure the safety of workers and the population at the plant site, atmospheric dispersion modeling and radiation dose calculations were carried out for a hypothetical accidental release of airborne radionuclides from the 3 MW TRIGA research reactor at Savar, Bangladesh. The reactor core consists of 100 fuel elements (1.82245 cm in diameter and 38.1 cm in length) arranged in an annular core, designed for steady-state and square-wave power levels of 3 MW (thermal) and for pulsing with a maximum power level of 860 MWth. The fuel is a uniform mixture of 20% uranium and 80% zirconium hydride. Total effective doses (TEDs) to the public at various downwind distances were evaluated with the health physics computer code HotSpot, developed by Lawrence Livermore National Laboratory, USA. The doses were estimated for the different Pasquill stability classes (categories A-F) with site-specific averaged meteorological conditions. Meteorological data, such as the average wind speed and the frequency distribution of wind direction, were also analyzed based on data collected near the reactor site. The effective doses obtained remain within the recommended maximum effective dose.
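
An independent sketch of the Gaussian-plume relation underlying codes such as HotSpot (not HotSpot itself); the source term, release height, wind speed, and the Briggs rural class-D dispersion coefficients are illustrative assumptions:

```python
# Ground-level centerline air concentration for a continuous release,
# chi = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2)), ground reflection included.
import math

def sigma_yz_classD(x):
    """Briggs (rural) dispersion coefficients for Pasquill class D, x in m."""
    sy = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sz = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    return sy, sz

Q = 1.0e6   # release rate, Bq/s
u = 3.0     # wind speed, m/s
H = 20.0    # effective release height, m

for x in (100, 500, 1000, 3000):
    sy, sz = sigma_yz_classD(x)
    chi = Q / (math.pi * u * sy * sz) * math.exp(-H ** 2 / (2 * sz ** 2))
    print(f"x={x:>5} m  concentration = {chi:.3e} Bq/m^3")
```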

Keywords: accidental release, dispersion modeling, total effective dose, TRIGA

Procedia PDF Downloads 138
6516 Research and Application of Multi-Scale Three Dimensional Plant Modeling

Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao

Abstract:

Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research topics and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resource protection, and the popularization of agricultural technology. Plant modeling spans many scales, from micro to macroscopic: cell, tissue, organ, plant, and canopy. The techniques currently used for data capture, feature analysis, and 3D reconstruction differ considerably across these scales. In this context, morphological data acquisition and 3D analysis and modeling of plants at different scales are introduced systematically, together with the data capture equipment commonly used at each scale. Key issues and difficulties at each scale are then described, with examples such as micron-scale phenotyping quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning, 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired with a 3D handheld scanner, and plant modeling by combining parameter-driven 3D organ templates. Several application examples using the 3D models and analysis results of plants are also introduced. A 3D maize canopy was constructed, and light distribution was simulated within the canopy, which was used for the design of an ideal plant type. A grape tree model was constructed from 3D digitizing and point cloud data and used to produce scientific content for the 11th International Conference on Grapevine Breeding and Genetics. Using tissue models of plants, a Google Glass was used to look around visually inside the plant to understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play an ever greater role in plant science.

Keywords: plant, three dimensional modeling, multi-scale, plant phenotyping, three dimensional data acquisition

Procedia PDF Downloads 278
6515 Air Dispersion Modeling for Prediction of Accidental Emission in the Atmosphere along Northern Coast of Egypt

Authors: Moustafa Osman

Abstract:

Modeling of air pollutants from accidental releases is performed to quantify the impact of industrial facilities on the ambient air. Mathematical methods are required to predict accidental scenarios in terms of failure probability in fail-safe mode and to analyze the consequences, quantifying the environmental damage and the effect on human health. The initial statement of the mitigation plan supports implementation during production and maintenance periods. In a number of mathematical methods, the flow rate at which gaseous and liquid pollutants might be accidentally released is determined for various source types: point, line, and area sources. These emissions are integrated with meteorological conditions through simplified stability parameters to compare the dispersion coefficients of non-continuous air pollution plumes. The differences are reflected in concentration levels and in the greenhouse effect of transporting the pollutant parcel over both urban and rural areas. This research reveals that the elevation effect near buildings and other structures is up to five times higher than over open terrain. These results agree with Sutton's suggested dispersion coefficients for the different stability classes.

Keywords: air pollutants, dispersion modeling, GIS, health effect, urban planning

Procedia PDF Downloads 375
6514 Risky Driving Behavior among Bus Drivers in Jakarta

Authors: Ratri A. Benedictus, Felicia M. Yolanda

Abstract:

Public transport is a crucial issue for capital cities in developing countries, such as Jakarta. An inadequate number and the low quality of public transport services leave personal vehicles as the main option; as a result, traffic jams are getting worse in Jakarta. The low quality of public transport, particularly buses, is compounded by the risky behavior of drivers, and traffic accidents involving public buses in Jakarta are frequent, sometimes with fatalities. The purpose of this study is to obtain a description of risky behavior among public bus drivers in Jakarta. 132 bus drivers were respondents in this study. Dorn's Risky Driving Behavior scale was used, and the data were analyzed using descriptive statistics. 51.5% of respondents reported often showing risky behavior while driving. The most frequent type of risky driving behavior was continuing to operate an unsafe bus (62%), followed by leaving the designated bus lane (30%), speeding (21%), violating road signs (15%), and driving in unhealthy physical condition (4%). The results suggest that the bus drivers' clear awareness of their own risky behaviors has not led to the emergence of safe driving behavior. Therefore, together with technical engineering and instrumentation interventions for this issue, psychological aspects also need to be considered, such as risk perception, safety attitude, safety culture, locus of control, and fatalism.

Keywords: bus driver, psychological factors, public transportation, risky driving behavior

Procedia PDF Downloads 362
6513 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is quite large. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21 (Impairment of Non-Cash-Generating Assets) and IPSAS 26 (Impairment of Cash-Generating Assets) have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measuring impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Under IPSAS 26, the value in use of cash-generating assets is measured by the discounted value of the money flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets, and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is placed on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how these methods are selected. The traditional and the expected cash flow approaches to calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, whatever the functional purpose of the impaired asset and whichever method is used for measuring it, the financial reporting should present realistic information regarding the value of the assets. In the theoretical development of the issue, the methods of scientific abstraction, analysis, and synthesis were used, and the research was carried out with a systemic approach. The research process uses international accounting standards and the theoretical research and publications of Georgian and foreign scientists.
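
A worked sketch contrasting the two present-value techniques mentioned above for value in use; all cash flows, probabilities, and the discount rate are invented for illustration:

```python
# Traditional approach: one "best estimate" cash flow stream, risk in the rate.
# Expected cash flow approach: probability-weighted scenarios per year.
rate = 0.06

best_estimate = [10000, 10000, 10000]          # per year
pv_traditional = sum(cf / (1 + rate) ** (t + 1)
                     for t, cf in enumerate(best_estimate))

scenarios = [(0.3, 8000), (0.5, 10000), (0.2, 12000)]  # (probability, cash flow)
expected_cf = sum(p * cf for p, cf in scenarios)       # = 9,800 per year
pv_expected = sum(expected_cf / (1 + rate) ** (t + 1) for t in range(3))

print(f"value in use (traditional):   {pv_traditional:,.0f}")
print(f"value in use (expected flow): {pv_expected:,.0f}")
```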

Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value of use

Procedia PDF Downloads 145
6512 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content determined by the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes adequate knowledge of the chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) contents, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computational procedure for uncertainty measurement.
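
A minimal sketch of the combined idea on a scalar calibration problem; the data, noise levels, and random-walk state model are invented, and this is a simplification of the paper's multi-element procedure:

```python
# A scalar Kalman filter tracks the calibration sensitivity a in y = a*x;
# a Metropolis MCMC run then characterizes its posterior PDF (uncertainty).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 40)                  # certified Mn content of standards, wt%
y = 2.5 * x + rng.normal(0, 0.3, x.size)    # measured XRF response

# --- Kalman filter: random-walk state a_t, measurement y_t = a_t * x_t + v ---
a, P, q, r = 1.0, 1.0, 1e-6, 0.3 ** 2
for xi, yi in zip(x, y):
    P += q                                  # predict
    K = P * xi / (xi * P * xi + r)          # gain
    a += K * (yi - a * xi)                  # update
    P *= (1 - K * xi)
print(f"KF estimate of a: {a:.3f} +/- {np.sqrt(P):.3f}")

# --- Metropolis MCMC: sample the posterior of a under a flat prior ---
def loglik(a_):
    return -0.5 * np.sum((y - a_ * x) ** 2) / r

chain, cur = [], a
for _ in range(20000):
    prop = cur + 0.01 * rng.normal()
    if np.log(rng.uniform()) < loglik(prop) - loglik(cur):
        cur = prop
    chain.append(cur)
post = np.array(chain[5000:])               # discard burn-in
print(f"MCMC posterior: mean={post.mean():.3f}, sd={post.std():.4f}")
```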

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 286
6511 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery

Authors: Diego Liberati

Abstract:

Standard molecular modeling is traditionally done through Schroedinger equations, with the help of powerful tools that manage them atom by atom, often requiring high-performance computing. Here, a full portfolio of new tools is offered, conjugating statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of machine learning of understandable rules) with the more traditional modeling, simulation, and control theory of mixed logical-dynamical hybrid processes. The portfolio is offered as quite general-purpose, even though it is exemplified on a popular set of chemical physics problems, with particular attention to drug discovery.

Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input

Procedia PDF Downloads 32
6510 A New and Simple Method of Plotting Binocular Single Vision Field (BSVF) Using the Cervical Range of Motion (CROM) Device

Authors: Mihir Kothari, Heena Khan, Vivek Rathod

Abstract:

Assessment of the binocular single vision field (BSVF) is traditionally done using a Goldmann perimeter. The measurement of the BSVF is important for the management of incomitant strabismus, viz. orbital fractures, thyroid orbitopathy, oculomotor cranial nerve palsies, Duane syndrome, etc. In this paper, we describe a new technique for measuring the BSVF using a CROM device. The Goldmann perimeter is a bulky and expensive (EUR 5,000.00 or more) instrument that is almost obsolete in contemporary ophthalmology practice, whereas a CROM device can easily be made in a do-it-yourself manner for a fraction of the price (about EUR 15.00). Moreover, the CROM device is useful for the accurate measurement of ocular torticollis, viz. in nystagmus, paralytic or incomitant squint, etc., and it is highly portable.

Keywords: binocular single vision, perimetry, cervical range of motion, visual field, binocular single vision field

Procedia PDF Downloads 67
6509 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, based on classic transponder technology; this system has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent on the design process and code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the 'threats database' and the 'conflict detector'. The first receives traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates the ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are checked by an additional conflict verification logic, in order to prevent undesirable behaviors of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector in terms of the number of alerts detected is not modified. The pre-check module uses analytical trajectory propagation for both target and ownship; this provides greater accuracy and avoids the step-by-step propagation, which demands a larger computational load. Furthermore, the pre-check allows targets that are certainly not threats to be excluded using an analytical and efficient geometrical approach, decreasing the computational load of the subsequent modules. This software improvement is not suggested by FAA documents, and it is the main innovation of this work. The efficiency and efficacy of the enhancement are verified in fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the TSAA application to be installed on devices hosting multiple applications and/or with low capacity in terms of available memory and computational capability.
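
A self-contained sketch of the kind of analytical geometric pre-check described, based on the closest point of approach (CPA); the thresholds and test vectors are invented:

```python
# Exclude targets whose closest approach is far away or outside the
# look-ahead window, before the expensive full conflict detection runs.
import numpy as np

LOOKAHEAD_S = 120.0      # alerting horizon, s
PROTECT_M = 1500.0       # protected radius, m

def needs_full_check(rel_pos, rel_vel):
    """rel_pos/rel_vel: target minus ownship, metres and m/s (2D horizontal)."""
    r = np.asarray(rel_pos, dtype=float)
    v = np.asarray(rel_vel, dtype=float)
    vv = v @ v
    t_cpa = 0.0 if vv < 1e-9 else -(r @ v) / vv     # time of closest approach
    if t_cpa < 0 or t_cpa > LOOKAHEAD_S:
        return False                                 # diverging or beyond horizon
    d_cpa = np.linalg.norm(r + v * t_cpa)            # miss distance at CPA
    return d_cpa < PROTECT_M

print(needs_full_check([8000, 3000], [-70, -25]))   # converging -> True
print(needs_full_check([8000, 3000], [40, 10]))     # diverging  -> False
```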

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 286
6508 Study of a Complete Free Route Implementation in the European Airspace

Authors: Cesar A. Nava-Gaxiola, C. Barrado

Abstract:

In harmony with SESAR (Single European Sky ATM Research) initiatives, a new concept related to airspace structures has been introduced in Europe: Free Route Airspace. The key idea of free route is an airspace in which users may freely plan a route between defined entry and exit waypoints, with the possibility of routing via intermediate points; free route flights remain subject to air traffic control (ATC) for the established separations. Free route airspace no longer presents fixed airways to airspace users; as a consequence, it brings a new paradigm for managing safe separation of aircraft inside these airspace blocks. Several European nations have introduced the concept, some completely and some partially, so far offering limited benefits to airspace users. This research evaluates a future scenario of free route implementation across Europe, considering a single airspace block configuration with free route applied throughout the complete upper airspace. The paper centers on investigating the benefits for airspace users and on studying the possible increase in air traffic controller task load under full application. Fast-time simulations are carried out to discover how much flight time and distance aircraft can save with an overall free route establishment. On the other side, the paper examines the evolution of conflicts deriving from possible separation losses between aircraft in this new environment. Free route conflicts can emerge at any point in the airspace, requiring great effort to solve, in comparison with fixed airways, where controllers normally found conflicts at known waypoints and solved them using the fixed network as a reference. The airspace configuration modeled in this study takes into account the actual navigation waypoint structure and moves it into a future scenario where new waypoints are added and new traffic flow patterns appear. In this sense, this research explores the advantages and the as-yet-unknown difficulties that a large-scale application of the free route concept may bring to European airspace.

Keywords: ATC conflicts, efficiency, free route airspace, SESAR

Procedia PDF Downloads 191
6507 Application of Fuzzy Analytical Hierarchical Process in Evaluation Supply Chain Performance Measurement

Authors: Riyadh Jamegh, AllaEldin Kassam, Sawsan Sabih

Abstract:

In modern market conditions, organizations face a high-pressure environment characterized by globalization, high competition, and customer orientation, so it is crucial to know and control the weak and strong points of the supply chain in order to improve performance. Performance measurement is therefore presented as an important tool of supply chain management, because it enables organizations to control, understand, and improve their efficiency. This paper aims to identify supply chain performance measurement (SCPM) by using the Fuzzy Analytical Hierarchical Process (FAHP). In our real application, the performance of organizations is estimated based on four parameters: a cost parameter indicator (CPI), an inventory turnover parameter indicator (INPI), a raw material parameter indicator (RMPI), and a safety stock level parameter indicator (SSPI). These indicators vary in their impact on performance depending on the policies and strategies of the organization. In this research, the FAHP technique has been used to identify the importance of these parameters; a first fuzzy inference (FIR1) is then applied to identify the performance indicator of each factor, depending on the factor's importance and its value. A second fuzzy inference (FIR2) is applied to integrate the effects of these indicators and identify the SCPM, which represents the required output. The developed approach provides an effective tool for the evaluation of supply chain performance measurement.

Keywords: fuzzy performance measurements, supply chain, fuzzy logic, key performance indicator

Procedia PDF Downloads 145
6506 Accumulation of Trace Metals in Leaf Vegetables Cultivated in High Traffic Areas in Ghent, Belgium

Authors: Veronique Troch, Wouter Van der Borght, Véronique De Bleeker, Bram Marynissen, Nathan Van der Eecken, Gijs Du Laing

Abstract:

Among the challenges associated with increased urban food production are health risks from food contamination, due to the higher pollution loads in urban areas compared to rural sites. The risks posed by industrial or traffic pollution of locally grown food were therefore defined as one of five high-priority issues of urban agriculture requiring further investigation. The impact of air pollution on urban horticulture is the subject of this study; more particularly, the study focuses on the atmospheric deposition of trace metals on leaf vegetables cultivated in the city of Ghent, Belgium. Ghent is a particularly interesting study site, as it actively promotes urban agriculture. Plants accumulate heavy metals by absorption from contaminated soils and through deposition on parts exposed to polluted air. Accumulation of trace metals in vegetation grown near roads has been shown to be significantly higher than in vegetation grown in rural areas, due to traffic-related contaminants in the air. Studies of vegetables have demonstrated that the uptake and accumulation of trace metals differ among crop types, species, and plant parts. Studies on vegetables and fruit trees in Berlin, Germany, revealed significant differences in trace metal concentrations depending on local traffic, crop species, planting style, and parameters related to barriers between the sampling site and neighboring roads. This study aims to supplement this scarce research on heavy metal accumulation in urban horticulture. Samples of leaf vegetables were collected from different sites in Ghent, including allotment gardens. The trace metal contents of these leaf vegetables were analyzed by ICP-MS (inductively coupled plasma mass spectrometry). In addition, precipitation at each sampling site was collected by NILU-type bulk collectors and similarly analyzed for trace metals. At one sampling site, different parameters which might influence the trace metal content of leaf vegetables were analyzed in detail: the distance of the planting site to the nearest road, barriers between the planting site and the nearest road, and the type of leaf vegetable. For comparison, a rural site located farther from city traffic and industrial pollution was included in the study. Preliminary results show a high correlation between the trace metal content of the atmospheric deposition and that of the leaf vegetables. Moreover, significantly higher Pb, Cu, and Fe concentrations were found on spinach collected from Ghent compared to spinach collected from the rural site. The distance of the planting site to the nearest road significantly affected the accumulation of Pb, Cu, Mo, and Fe on spinach: concentrations of those elements increased with decreasing distance between the planting site and the nearest road. Preliminary results did not show a significant effect of barriers between the planting site and the nearest road on the accumulation of trace metals on leaf vegetables. The overall goal of this study is to complete and refine existing guidelines for urban gardening so as to exclude potential health risks from food contamination. Accordingly, this information can help city governments and civil society in the professionalization and sustainable development of urban agriculture.

Keywords: atmospheric deposition, leaf vegetables, trace metals, traffic pollution, urban agriculture

Procedia PDF Downloads 241
6505 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration

Authors: In-Hwan Baek

Abstract:

Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose, using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) a single oral dose of valsartan 160 mg (Diovan® 160 mg), in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling, using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h, and the area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following single doses of valsartan 80 mg and 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and the subsequent quantitative modeling approaches provided detailed pharmacokinetic information. The current findings provide useful information in dogs that will aid the future development of improved formulations or fixed-dose combinations.
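
The standard one-compartment oral-absorption curve referred to above (the Bateman equation), with illustrative parameters rather than the fitted dog values; note how Cmax and AUC scale in proportion to the dose while the curve shape is unchanged:

```python
# C(t) for first-order absorption (ka) and elimination (ke), apparent
# volume V/F; parameters are placeholders for illustration only.
import numpy as np

def conc(t, dose_mg, ka, ke, V_F):
    return dose_mg * ka / (V_F * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def auc(c, t):
    """Trapezoidal AUC over the sampling window."""
    return float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

t = np.linspace(0, 24, 200)                    # h
c80 = conc(t, 80, ka=1.2, ke=0.25, V_F=25.0)   # mg/L == ug/mL
c160 = conc(t, 160, ka=1.2, ke=0.25, V_F=25.0)

for label, c in (("80 mg", c80), ("160 mg", c160)):
    print(f"{label}: Cmax={c.max():.2f} ug/mL at t={t[np.argmax(c)]:.1f} h, "
          f"AUC24={auc(c, t):.1f} ug*h/mL")
```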

Keywords: dose-dependent, modeling, pharmacokinetics, valsartan

Procedia PDF Downloads 299
6504 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program

Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun

Abstract:

A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of the major accident variables on the accident. This study aims to analyze the contribution of individual accident variables to accident results, based on accurate reconstruction of traffic accidents using the Multi-Body system of PC-Crash, an accident reconstruction program, and on simulation of each scenario. The Multi-Body (MB) system of PC-Crash performs multi-body accident reconstruction, showing motions in diverse directions that could not be approached previously; it designs and reproduces a body form that exhibits realistic motions by using several connected bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' that happened in November 2017, this study simulated the cargo drop accident and analyzed the contribution of the individual major variables. Based on driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a drop of cargo. Vehicle speed, cargo load, and cargo anchoring equipment were thus the major accident variables, and the contribution analysis results for the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%. When the freight truck obeyed only the cargo load limit, the scattering distance decreased by 5%, and the number of dropped containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance fell by 38%, and the number of dropped containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the dispersion of accident damage; for a truck that did not allow a fall of cargo, a different type of accident occurred when it was driven too fast with an excessive cargo load; and when the freight truck obeyed both the speed limit and the cargo load limit, the possibility of causing an accident was lowest.

Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system

Procedia PDF Downloads 200