Search results for: traffic measurement and modeling
5423 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students
Authors: Lily Ranjbar, Haori Yang
Abstract:
Online education has been around for several decades, but its importance became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through delivering content and oversight processes, it has limitations in developing hands-on laboratory skills, especially in STEM fields. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses. Also, many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education: it can engage students and improve their learning outcomes. In addition, online education and virtual labs have gained substantial popularity in engineering and science education. Therefore, developing virtual labs is vital for institutions to deliver high-class education to their students, including their online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the company SpectralLabs, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics (MHP) program. It was essential for us to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was all accomplished using a Realistic, Adaptive, Interactive Learning System (RAILS). RAILS is a comprehensive software simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server, as well as a 3D simulation package that is downloaded locally to user machines. Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact, and engage with a real STEM lab in all its dimensions. It enables them to feel like they are in a real lab environment and to see the same system they would in a lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces provide students with full functionality for changing the experimental setup and for live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab. Experimental results can then be tracked and analyzed with an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility in course modality helped us attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them. They have an unlimited amount of time to repeat and engage in experiments.
This lab will also help us speed up training in nuclear science and engineering.
Keywords: advanced radiation detection and measurement, virtual laboratory, realistic adaptive interactive learning system (rails), online education in stem fields, student engagement, stem online education, stem laboratory, online engineering education
Procedia PDF Downloads 92
5422 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology
Authors: Anjian Chen, Joseph C. Chen
Abstract:
This paper studies a case where the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an FDM additive manufacturing process. The baseline Cp is 0.274 and Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the effects of the parameters (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was made to prove that the results reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing
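The capability indices quoted above (Cp from 0.274 to 1.605, Cpk from 0.654 to 1.233) follow the standard definitions Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ. The sketch below shows how such indices could be computed from surface-roughness readings; the specification limits and sample values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Compute Cp and Cpk from measured values and specification limits."""
    mu = np.mean(samples)
    sigma = np.std(samples, ddof=1)                 # sample standard deviation
    cp = (usl - lsl) / (6.0 * sigma)                # potential capability
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)   # actual capability
    return cp, cpk

# Illustrative surface-roughness readings (Ra, micrometres) and spec limits;
# the study's own measurements and limits are not given in the abstract.
ra = np.array([9.8, 10.4, 10.1, 9.6, 10.3, 9.9, 10.2, 10.0, 9.7])
cp, cpk = process_capability(ra, lsl=8.0, usl=12.0)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
```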
Procedia PDF Downloads 395
5421 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound
Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo
Abstract:
Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to the well-known clinical evidence. In the determination of MR severity, quantification of sonographic findings would be useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is the least and the velocity is the maximum. The quantification of the vena contracta, i.e. the vena contracta width (VCW) at the mitral valve, can serve as a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough, as the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, the VCW can be observed where blood flows toward the probe, appearing as a red or yellow area; the corresponding brightness represents the value of the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, and value) to align closely with the way human vision perceives red and yellow. Using an ellipse to fit the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- or inter-observer variability in measuring subtle distances.
Keywords: mitral regurgitation, vena contracta, color doppler, image processing
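As a rough illustration of the pipeline described above (HSV conversion, isolation of the red/yellow high-flow region, ellipse fitting, shortest diameter as the VCW), the following OpenCV sketch shows one possible form it could take. The hue thresholds, the input file name, and the omission of the pixel-to-centimetre calibration and probe-angle correction are simplifying assumptions.

```python
import cv2
import numpy as np

def estimate_vcw_px(doppler_bgr):
    """Sketch: isolate the red/yellow jet in a color Doppler frame,
    fit an ellipse to it, and return the minor-axis length in pixels."""
    hsv = cv2.cvtColor(doppler_bgr, cv2.COLOR_BGR2HSV)

    # Illustrative hue ranges for red and yellow; real Doppler color maps
    # would need calibrated thresholds.
    red_lo = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
    red_hi = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    mask = cv2.bitwise_or(cv2.bitwise_or(red_lo, red_hi), yellow)

    # OpenCV 4.x signature (two return values).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    jet = max(contours, key=cv2.contourArea)   # largest high-flow region
    if len(jet) < 5:                           # fitEllipse needs at least 5 points
        return None
    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(jet)
    return min(axis_a, axis_b)                 # shortest diameter ~ VCW in pixels

frame = cv2.imread("doppler_frame.png")        # hypothetical input frame
vcw_px = estimate_vcw_px(frame)
print("VCW (pixels):", vcw_px)
```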
Procedia PDF Downloads 371
5420 A Method for Processing Unwanted Target Caused by Reflection in Secondary Surveillance Radar
Authors: Khanh D.Do, Loi V.Nguyen, Thanh N.Nguyen, Thang M.Nguyen, Vu T.Tran
Abstract:
Along with the development of secondary surveillance radar (SSR) in air traffic surveillance systems, multipath phenomena have always been a noticeable problem. The following article discusses the geometrical and power aspects of the multipath interference caused by reflection in SSR and proposes a method to deal with these unwanted multipath targets (ghosts) by predicting false-target positions and adaptively suppressing them. A field-experiment example is given at the end of the article to demonstrate the efficiency of this measure.
Keywords: multipath, secondary surveillance radar, digital signal processing, reflection
Procedia PDF Downloads 164
5419 The Dark Side of the Fight against Organised Crime
Authors: Ana M. Prieto del Pino
Abstract:
As is well known, the UN Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988) was a landmark regarding the seizure of the proceeds of crime. Depriving criminals of the profits from their activity became a priority at the international level in the fight against organised crime. Enabling the confiscation of the proceeds of illicit traffic in narcotic drugs and psychotropic substances, criminalising money laundering, and confiscating the proceeds thereof are the three measures taken in order to achieve that purpose. The beginning of the 21st century brought the declaration of war on corruption and on the illicit enjoyment of the profits thereof onto the international scene. According to the UN Convention against Transnational Organised Crime (2000), States Parties should adopt the necessary measures to enable the confiscation of proceeds of crime derived from offences (or property of equivalent value) and of property, equipment and other instrumentalities used in offences covered by that Convention. The UN Convention against Corruption (2003) states asset recovery explicitly as a fundamental principle and sets forth measures aiming at the direct recovery of property through international cooperation in confiscation. Furthermore, European legislation has made many significant strides forward in less than twenty years concerning money laundering, confiscation, and asset recovery. Crime does not pay, let there be no doubt about it. Nevertheless, we must be very careful not to sing out of tune with individual rights and legal guarantees. On the one hand, innocent individuals and businesses must be protected, since they should not pay for the faults of the guilty. On the other hand, the rule of law must be preserved and not be tossed aside regarding those who have carried out criminal activities. An in-depth analysis of judicial decisions on money laundering and the confiscation of proceeds of crime issued by European national courts and by the European Court of Human Rights in the last decade has been carried out from the perspective of human rights, legal guarantees, and basic principles of criminal law. The study has revealed violations of the right to property and of the proportionality principle, as well as the infringement of basic principles of states' domestic substantive and procedural criminal law systems. The most relevant ones have to do with the punishment of money laundering committed through negligence, non-conviction-based confiscation, and an overly far-reaching interpretation of the notion of 'proceeds of crime'. Almost everything in life has a bright side and a dark side. The confiscation of criminal proceeds and asset recovery are not an exception to this rule.
Keywords: confiscation, human rights, money laundering, organized crime
Procedia PDF Downloads 139
5418 The Role of Speed Reduction Model in Urban Highways Tunnels Accidents
Authors: Khashayar Kazemzadeh, Mohammad Hanif Dasoomi
Abstract:
Owing to the increasing travel demand in cities, bridges and tunnels are viewed as fundamental components of urban transportation systems. Normally, due to the geometric constraints formed in tunnels, the design speed in tunnels is lower than the speed on the connected highways. Therefore, drivers tend to reduce speed near the entrance of a tunnel. In this paper, the effect of this speed reduction on accidents occurring at tunnel entrances is discussed. The relation between accident frequency and the parameters of speed, traffic volume, and time of the accident in the studied tunnel is analyzed, and a mathematical model is proposed.
Keywords: urban highway, accident, tunnel, mathematical model
Procedia PDF Downloads 472
5417 Simplified INS\GPS Integration Algorithm in Land Vehicle Navigation
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the implementation of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost micro-electro-mechanical-system (MEMS) inertial sensors is now making it feasible to develop INS using an inertial measurement unit (IMU). INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated with a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, that is, to use a partial IMU (ParIMU) configuration. For land vehicular use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, and two horizontal accelerometers are used for determining the velocity of the vehicle. This paper presents a field experiment for a low-cost strapdown ParIMU\GPS combination, with post-processing of the data for the determination of the 2-D components of position (trajectory), velocity, and heading. In the present approach, we have neglected earth rotation and gravity variations because of the poor gyroscope sensitivities of our low-cost IMU (inertial measurement unit) and because of the relatively small area of the trajectory.
Keywords: GPS, IMU, Kalman filter, materials engineering
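A minimal sketch of a loosely coupled integration of the kind described, assuming a flat 2-D trajectory, levelled accelerometers, and invented noise values; it is not the authors' filter, but it illustrates the predict (INS mechanization) and update (GPS fix) steps of the Kalman loop.

```python
import numpy as np

dt = 0.1                                   # IMU sample period [s] (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])               # constant-velocity state transition
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]])                    # maps body accelerations into the state
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])               # GPS observes position only
Q = np.eye(4) * 0.05                       # process noise (assumed)
R = np.eye(2) * 4.0                        # GPS position noise, ~2 m std (assumed)

x = np.zeros(4)                            # state: [x, y, vx, vy]
P = np.eye(4)

def predict(accel_xy):
    """INS mechanization step using levelled horizontal accelerations."""
    global x, P
    x = F @ x + B @ accel_xy
    P = F @ P @ F.T + Q

def gps_update(pos_xy):
    """Loosely coupled correction with a GPS position fix."""
    global x, P
    y = pos_xy - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

# Illustrative use: 10 IMU steps per 1 Hz GPS fix.
for k in range(100):
    predict(np.array([0.2, 0.0]))          # fake forward acceleration
    if k % 10 == 9:
        gps_update(np.array([0.1 * k, 0.0]) + np.random.randn(2))
print("estimated state:", x)
```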
Procedia PDF Downloads 422
5416 A Suggestive Framework for Measuring the Effectiveness of Social Media: An Irish Tourism Study
Authors: Colm Barcoe, Garvan Whelan
Abstract:
Over the past five years, visits by American holidaymakers to Ireland have grown exponentially owing to the online strategies of Tourism Ireland, a destination marketer (DMO) with a meagre budget that is stretched by its understanding of the best practices needed to maximise its monetary allowance. The suggested framework incorporates a range of key performance indicators (KPIs), such as financial, marketing, and operational indicators, which offer a scale of measurement from which the Irish DMO can monitor the success of each promotional campaign when targeting the US and Canada. These are presented not as final solutions but rather as suggestions based on empirical evidence obtained from both primary and secondary sources. This research combines the wisdom extracted through qualitative methodologies with the objective of understanding the processes that drive both emergent and agile strategies. The study extends the existing work on performance and examines the role of social media in the context of promoting Ireland to North America. Two main themes are identified and analysed in this investigation: the approach of the DMO when advocating Ireland as a brand, and the benefits of digital platforms set against a proposed scale of KPIs, such as destination marketing, brand positioning, and identity development. The key narrative of this analysis is the power of social media when capitalising upon marketing opportunities while operating on a relatively small budget. This will always be a relevant theme of discussion owing to the responsibility of an organisation like Tourism Ireland operating under the restraints imposed by government funding. The overall conclusions of this research may help those concerned with implementing social media strategies to develop clearer models of measurement when promoting a destination to North America. The suggestions of this study will particularly benefit small and medium enterprises.
Keywords: destination marketing, framework, measure, performance
Procedia PDF Downloads 155
5415 Chemometric Analysis of Raw Milk Quality Originating from Conventional and Organic Dairy Farming in AP Vojvodina, Serbia
Authors: Sanja Podunavac-Kuzmanović, Denis Kučević, Strahinja Kovačević, Milica Karadžić, Lidija Jevrić
Abstract:
The present study describes the application of chemometric methods in the analysis of milk samples collected from a conventional dairy farm and an organic dairy farm in AP Vojvodina, Republic of Serbia. The chemometric analysis included the application of univariate regression modeling and the analysis of variance (ANOVA) method. ANOVA was used in order to determine the differences in fatty acid content between the milk samples from the conventional and the organic farm. The results of the ANOVA testing indicate that there is a highly statistically significant difference between the content of fatty acids (saturated vs. unsaturated fatty acids) in the different dairy farming systems. Besides, linear univariate models have been obtained by modeling the linear relationships between the milk fat content and the saturated fatty acid content, and between the milk fat content and the unsaturated fatty acid content. The models obtained on the basis of the milk samples originating from organic farming are statistically better than the models based on the milk samples from conventional farming.
Keywords: chemometrics, milk, organic farming, quality control
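A short sketch of the two statistical tools named above, ANOVA for the conventional-vs-organic comparison and a univariate linear model for fat vs. fatty-acid content, using placeholder numbers rather than the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder fatty-acid measurements (g per 100 g of fat); the study's
# actual values are not reported in the abstract.
sfa_conventional = np.array([68.1, 67.5, 69.0, 70.2, 68.8])
sfa_organic      = np.array([64.3, 65.1, 63.8, 66.0, 64.9])

# One-way ANOVA for the difference in saturated fatty acid content
# between the two farming systems.
f_stat, p_value = stats.f_oneway(sfa_conventional, sfa_organic)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Univariate linear model: milk fat content vs. saturated fatty acid content.
milk_fat = np.array([3.6, 3.8, 3.7, 4.0, 3.9, 3.5, 3.7, 3.9, 4.1, 3.8])
sfa      = np.array([66.0, 67.2, 66.5, 68.1, 67.7, 65.4, 66.8, 67.9, 68.6, 67.1])
reg = stats.linregress(milk_fat, sfa)
print(f"SFA = {reg.slope:.2f} * fat + {reg.intercept:.2f},  R^2 = {reg.rvalue**2:.3f}")
```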
Procedia PDF Downloads 240
5414 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations have structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems under OLTP processing that support the information system; at the OLAP processing level, however, these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities needed to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools which are of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. The following paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP, and AUP. It also covers object-relational models, spatial data models, and the baseline of data modeling under UML and big data; in this way, it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account the processes required for information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.
Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse
Procedia PDF Downloads 412
5413 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation
Authors: Sai Hung Cheung, Zhe Shao
Abstract:
Recent earthquake-induced tsunamis in Padang (2004) and Tohoku (2011) brought huge losses of life and property. Maintaining vertical evacuation systems is the most crucial strategy to effectively reduce casualties during a tsunami event. Thus, it is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advancement in the computational simulation of tsunamis and in wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. The failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of the Subset Simulation algorithm and a recently proposed moving least squares response surface approach for stochastic sampling is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without using the response surface approach.
Keywords: response surface model, subset simulation, structural reliability, tsunami risk
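A compressed sketch of the Subset Simulation idea referenced above, estimating a small failure probability P[g(X) ≤ 0] in standard normal space. The plain Metropolis step, the toy limit state, and all tuning constants are simplifying assumptions, and the moving least squares response surface of the paper is not included.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, sigma=1.0, max_levels=10, rng=None):
    """Minimal Subset Simulation estimate of P[g(X) <= 0] with X ~ N(0, I).

    Uses a plain Metropolis random walk in standard normal space; a full
    implementation would use the modified Metropolis algorithm and more
    careful seed handling."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    pf = 1.0
    for _ in range(max_levels):
        idx = np.argsort(y)
        n_seed = int(p0 * n)
        thresh = y[idx[n_seed - 1]]
        if thresh <= 0.0:                        # failure domain reached
            return pf * np.mean(y <= 0.0)
        pf *= p0
        seeds_x, seeds_y = x[idx[:n_seed]], y[idx[:n_seed]]
        chains = n // n_seed                     # samples generated per seed
        new_x, new_y = [], []
        for xs, ys in zip(seeds_x, seeds_y):
            cur_x, cur_y = xs, ys
            for _ in range(chains):
                cand = cur_x + sigma * rng.standard_normal(dim)
                # accept with the standard-normal density ratio, then require
                # the candidate to stay in the current conditional level
                if rng.random() < min(1.0, np.exp(0.5 * (cur_x @ cur_x - cand @ cand))):
                    cand_y = g(cand)
                    if cand_y <= thresh:
                        cur_x, cur_y = cand, cand_y
                new_x.append(cur_x.copy())
                new_y.append(cur_y)
        x, y = np.array(new_x), np.array(new_y)
    return pf * np.mean(y <= 0.0)

# Toy limit state: failure when the sum of 10 standard normals exceeds 9.
g = lambda u: 9.0 - np.sum(u)
print("estimated failure probability:", subset_simulation(g, dim=10, rng=0))
```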
Procedia PDF Downloads 386
5412 Mechanical Design of External Pressure Vessel to an AUV
Authors: Artur Siqueira Nóbrega de Freitas
Abstract:
Autonomous underwater vehicles (AUVs), as well as remotely operated vehicles (ROVs), are unmanned technologies used in oceanographic investigations, offshore oil extraction, military applications, and other areas. Differently from AUVs, ROVs use a physical connection with the surface for energy supply and data traffic, whereas AUVs use batteries and embedded data acquisition systems. These technologies have progressed, supported by studies in the areas of robotics, embedded systems, naval engineering, etc. This work presents a methodology for the design of the external pressure vessel, which is responsible for containing and keeping the internal components of the vehicle, such as the on-board electronics and sensors, isolated from contact with water, creating a pressure differential between the inner and outer regions.
Keywords: vessel, external pressure, AUV, buckling
Procedia PDF Downloads 523
5411 TerraEnhance: High-Resolution Digital Elevation Model Generation using GANs
Authors: Siddharth Sarma, Ayush Majumdar, Nidhi Sabu, Mufaddal Jiruwaala, Shilpa Paygude
Abstract:
Digital Elevation Models (DEMs) are digital representations of the Earth’s topography, which include information about the elevation, slope, aspect, and other terrain attributes. DEMs play a crucial role in various applications, including terrain analysis, urban planning, and environmental modeling. In this paper, TerraEnhance is proposed, a distinct approach for high-resolution DEM generation using Generative Adversarial Networks (GANs) combined with Real-ESRGANs. By learning from a dataset of low-resolution DEMs, the GANs are trained to upscale the data by 10 times, resulting in significantly enhanced DEMs with improved resolution and finer details. The integration of Real-ESRGANs further enhances visual quality, leading to more accurate representations of the terrain. A post-processing layer is introduced, employing high-pass filtering to refine the generated DEMs, preserving important details while reducing noise and artifacts. The results demonstrate that TerraEnhance outperforms existing methods, producing high-fidelity DEMs with intricate terrain features and exceptional accuracy. These advancements make TerraEnhance suitable for various applications, such as terrain analysis and precise environmental modeling.Keywords: DEM, ESRGAN, image upscaling, super resolution, computer vision
Procedia PDF Downloads 11
5410 Alloy Design of Single Crystal Ni-base Superalloys by Combined Method of Neural Network and CALPHAD
Authors: Mehdi Montakhabrazlighi, Ercan Balikci
Abstract:
The neural network (NN) method is applied to the alloy development of single-crystal Ni-base superalloys with low density and improved mechanical strength. A dataset of 1200 entries, which includes the chemical composition of the alloys, applied stress, and temperature as inputs and density and time to rupture as outputs, is used for training and testing the network. Thermodynamic phase diagram modeling of the screened alloys is performed with the Thermo-Calc software to model the equilibrium phases and also the microsegregation in solidification processing. The model is first trained on 80% of the data, and the remaining 20% is used to test it. Comparing the predicted values with the experimental ones showed that a well-trained network is capable of accurately predicting the density and time-to-rupture strength of Ni-base superalloys. The modeling results are used to determine the effect of alloying elements, stress, temperature, and gamma-prime phase volume fraction on the rupture strength of Ni-base superalloys. This approach is in line with the Materials Genome Initiative and the integrated computational materials engineering approaches promoted recently with the aim of reducing the cost and time for the development of new alloys for critical aerospace components. This work has been funded by TUBITAK under grant number 112M783.
Keywords: neural network, rupture strength, superalloy, thermocalc
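The abstract does not specify the network architecture, so the following is only a generic sketch of the 80/20 train-test workflow with a small multilayer perceptron on synthetic composition-stress-temperature data; the feature set and the invented input-output relationships merely stand in for the real 1200-entry dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the dataset described in the abstract:
# inputs = a few composition fractions (wt.%), stress [MPa], temperature [K];
# outputs = density [g/cm^3] and time to rupture [h].
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 0, 100, 900], [10, 8, 6, 600, 1300], size=(1200, 5))
y = np.column_stack([
    8.2 + 0.05 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.05, 1200),    # density
    5000 - 5 * X[:, 3] - 2 * (X[:, 4] - 900) + rng.normal(0, 100, 1200),  # rupture life
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on the held-out 20%:", model.score(X_test, y_test))
```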
Procedia PDF Downloads 316
5409 Microstructural Mechanical Properties of Human Trabecular Bone Based on Nanoindentation Test
Authors: K. Jankowski, M. Pawlikowski, A. Makuch, K. Skalski
Abstract:
Depth-sensing indentation (DSI), or nanoindentation, is becoming a more and more popular method for measuring the mechanical properties of various materials and tissues at the micro-scale. This technique allows measurements without complicated sample preparation procedures, which makes the method very useful. As a result of the measurement, the force and displacement of the indenter are obtained. It is also possible to determine three measures of hardness, i.e. Martens hardness (HM), nanohardness (HIT), and Vickers hardness (HV), as well as the Young modulus EIT. In this work, the mechanical properties of trabecular bone were investigated. The bone samples were harvested from human femoral heads during hip replacement surgery. The patients were of different ages, sexes, and stages of tissue degeneration caused by osteoarthritis. The specimens were divided into three groups; each group contained samples harvested from patients of a different age range. All samples were investigated under the same measurement conditions. The maximum load was Pmax = 500 mN and the loading rate was 500 mN/min. The tests were conducted without a hold at the peak force. The tests were conducted with a Vickers indenter tip and a spherical tip of diameter 0.2 mm. Each trabecular bone sample was tested 7 times in a close area of the same trabecula. The measured load P as a function of indentation depth allowed the hysteresis loop and HM, HIT, HV, and EIT to be obtained. Results for an arbitrarily chosen sample are HM = 289.95 ± 42.31 MPa, HIT = 430.75 ± 45.37 MPa, HV = 40.66 ± 4.28 Vickers, EIT = 7.37 ± 1.84 GPa for the Vickers tip and HM = 115.19 ± 15.03 MPa, HIT = 165.80 ± 19.30 MPa, HV = 16.90 ± 1.97 Vickers, EIT = 5.30 ± 1.31 GPa for the spherical tip. The results of the nanoindentation tests show that this method is very useful and well suited for obtaining the mechanical properties of trabecular bone. The estimated values of the elastic modulus are similar. The differences between the hardness values are significant, but this is a result of using two different types of tips. However, it has to be emphasised that the differences in the values of elastic modulus and hardness also result from the different testing protocols, the anisotropy and asymmetry of the micro-samples, and the hydration of the bone.
Keywords: human bone, mechanical properties, nanohardness, nanoindentation, trabecular bone
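A rough sketch of how HM, HIT, and EIT are commonly evaluated from a load-depth curve with the Oliver-Pharr approach for an ideal Vickers tip (contact depth hc = hmax − 0.75·Pmax/S, projected area 24.5·hc², indenter surface area 26.43·h², reduced modulus Er = (√π/2)·S/√Ap). The input numbers are illustrative, not the study's raw curves, and no tip-area calibration is applied.

```python
import numpy as np

def oliver_pharr(p_max_mN, h_max_nm, stiffness_mN_per_nm,
                 nu_sample=0.3, E_i_GPa=1141.0, nu_i=0.07):
    """Rough Oliver-Pharr evaluation for an ideal Vickers tip.

    p_max: peak load, h_max: depth at peak load, stiffness: unloading slope S.
    Returns Martens hardness [MPa], indentation hardness [MPa], modulus [GPa]."""
    P = p_max_mN * 1e-3            # N
    h = h_max_nm * 1e-9            # m
    S = stiffness_mN_per_nm * 1e6  # N/m

    h_c = h - 0.75 * P / S                 # contact depth, epsilon = 0.75
    A_p = 24.5 * h_c**2                    # projected contact area [m^2]
    A_s = 26.43 * h**2                     # indenter surface area under load [m^2]

    HM = P / A_s                           # Martens hardness [Pa]
    H_IT = P / A_p                         # indentation hardness [Pa]
    E_r = np.sqrt(np.pi) / 2.0 * S / np.sqrt(A_p)            # reduced modulus (beta ~ 1)
    E_IT = (1 - nu_sample**2) / (1 / E_r - (1 - nu_i**2) / (E_i_GPa * 1e9))
    return HM / 1e6, H_IT / 1e6, E_IT / 1e9

# Illustrative numbers only; the study's raw load-depth data are not given.
print(oliver_pharr(p_max_mN=500.0, h_max_nm=9000.0, stiffness_mN_per_nm=0.15))
```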
Procedia PDF Downloads 277
5408 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants
Authors: S. Ebadzadsahraei, H. Kazemian
Abstract:
The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, which are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in a city neighborhood. Furthermore, this study quantifies the percentage of VOCs in the city air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining the PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling in this research. Chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis. VOC measurement of the air samples was performed using a gas chromatography-flame ionization detector (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing for quantitative measurement of VOC molecules at sub-ppb levels. In this study, a sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare samples for measurement by the GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. The average concentrations of PM₂.₅ differ greatly between wildfire and ordinary days: during the wildfire period the average concentration is 83.0 μg/m³, whereas on ordinary days it is 23.7 μg/m³. Also, higher concentrations of iron, nickel, and manganese were found in all samples, and mercury was found in some samples. Exposure to such elevated concentrations can have negative health effects.
Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs
Procedia PDF Downloads 143
5407 Microclimate Variations in Rio de Janeiro Related to Massive Public Transportation
Authors: Marco E. O. Jardim, Frederico A. M. Souza, Valeria M. Bastos, Myrian C. A. Costa, Nelson F. F. Ebecken
Abstract:
Urban public transportation in Rio de Janeiro is based on bus lines, powered by diesel, and four limited metro lines that serve only some neighborhoods. This work presents an infrastructure built to better understand microclimate variations related to massive urban transportation in some specific areas of the city. The use of sensor nodes with small analytics capacity provides environmental information to the population and to public services. The analysis of data collected from a few small sensors positioned near some heavy-traffic streets shows the harmful impact of poor bus route planning.
Keywords: big data, IoT, public transportation, public health system
Procedia PDF Downloads 255
5406 Perception of Safety of Workers with Different Job Levels at Construction Sites
Authors: Muhammad Dawood Idrees, Arsalan Ansari
Abstract:
The construction industry is considered one of the most dangerous industries because workers' safety is always a major concern due to the extensive number of accidents, injuries, and casualties at worksites. There are various causes of accidents at construction sites; several factors influence workers' perception of safety, and psychological factors are among them. The perception of safety varies from region to region, and it also varies with the demographics of workers, such as gender, age, education, job level, etc. However, research on workers at different job levels, such as labor and managerial staff, that evaluates the impact of psychological factors is limited. The objective of this research is to evaluate the effect of psychological factors on workers at different job levels. An extensive literature review was conducted to find the causal relationships between psychological factors and the perception of safety, and a hypothetical structural model was developed based on the literature review. A survey instrument based on the psychological factors was developed, and data were obtained from several construction sites. The structural equation modeling (SEM) technique was adopted in order to examine the effect of psychological factors on the perception of safety of workers at different job levels. The results of this analysis reveal that job security and organizational relationships are the most influential factors for labor staff, whereas job satisfaction, mental stress, and workload are dominant for managerial staff.
Keywords: accidents, job level of workers, perception of safety, structural equation modeling
Procedia PDF Downloads 161
5405 Solar Calculations of Modified Arch (Semi-Spherical) Type Greenhouse System for Bayburt City
Authors: Uğur Çakir, Erol Şahin, Kemal Çomakli, Ayşegül Çokgez Kuş
Abstract:
Solar energy is regarded as the main source among all energy sources in the world, and it can be used directly or indirectly in many applications, such as agriculture, heating, cooling, or electricity production. Greenhousing is one of the first agricultural activities in which solar energy can be used directly. Greenhouses offer suitable, easily controlled conditions for plant growth and are made using a covering material that allows sunlight to enter the system. The covering material can be glass, fiberglass, plastic, or another transparent element. This study investigates the solar energy usability and solar energy gain of a semi-spherical (modified arch) type greenhouse system for different orientations and positions under the climatic conditions of Bayburt. Within the scope of this study, the aim is to determine the best orientation and the best dimensions of a semi-spherical greenhouse to obtain the greatest solar benefit. To achieve this aim, a modeling study was carried out using MATLAB. Although the model was run for certain predefined shapes and greenhouses, it can also be used for greenhouses or buildings of different shapes. The basic parameters are the greenhouse azimuth angle, the ratio of the long edge to the short edge, and the seasonal solar energy gain of the greenhouse.
Keywords: greenhousing, solar energy, direct radiation, renewable energy
Procedia PDF Downloads 480
5404 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico
Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano
Abstract:
In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks constantly interact with motor vehicle users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, according to their dangerousness (high, medium, or low), types A, B, and C have been established, and for each one a different type of audible and visual signaling and gates is recommended, as well as horizontal and vertical signaling. This classification is based on a weighting, but regrettably, it is not explained how the weight values were obtained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries consider a broader classification so that each crossing is addressed more precisely and equipment costs are adjusted. The lack of a proper classification could lead to cost overruns in equipment and to deficient operation. To exemplify the lack of a good classification, six crossings are studied, three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs with the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. Such implementation is unaffordable for most municipal governments. Also, an inventory of the most dangerous grade crossings in urban and rural areas must be obtained. Then, an approach for improving the classification of grade crossings is suggested. This approach must be based on design criteria, the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing, accidents related to motorized and non-motorized vehicles, land use and land management, the type of area, and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
Keywords: accidents, grade crossing, railroad, traffic safety
Procedia PDF Downloads 109
5403 Testifying in Court as a Victim of Crime for Persons with Little or No Functional Speech: Vocabulary Implications
Authors: Robyn White, Juan Bornman, Ensa Johnson
Abstract:
People with disabilities are at a high risk of becoming victims of crime. Individuals with little or no functional speech (LNFS) face an even higher risk. One way of reducing the risk of remaining a victim of crime is to face the alleged perpetrator in court as a witness – therefore it is important for a person with LNFS who has been a victim of crime to have the required vocabulary to testify in court. The aim of this study was to identify and describe the core and fringe legal vocabulary required by illiterate victims of crime, who have little or no functional speech, to testify in court as witnesses. A mixed-method, the exploratory sequential design consisting of two distinct phases was used to address the aim of the research. The first phase was of a qualitative nature and included two different data sources, namely in-depth semi-structured interviews and focus group discussions. The overall aim of this phase was to identify and describe core and fringe legal vocabulary and to develop a measurement instrument based on these results. Results from Phase 1 were used in Phase 2, the quantitative phase, during which the measurement instrument (a custom-designed questionnaire) was socially validated. The results produced six distinct vocabulary categories that represent the legal core vocabulary and 99 words that represent the legal fringe vocabulary. The findings suggested that communication boards should be individualised to the individual and the specific crime. It is believed that the vocabulary lists developed in this study act as a valid and reliable springboard from which communication boards can be developed. Recommendations were therefore made to develop an Alternative and Augmentative Communication Resource Tool Kit to assist the legal justice system.Keywords: augmentative and alternative communication, person with little or no functional speech, sexual crimes, testifying in court, victim of crime, witness competency
Procedia PDF Downloads 481
5402 Performance Measurement by Analytic Hierarchy Process in Performance Based Logistics
Authors: M. Hilmi Ozdemir, Gokhan Ozkan
Abstract:
Performance-based logistics (PBL) is a strategic approach that enables creating long-term, win-win relations among stakeholders in acquisition. Contrary to traditional single transactions, in this approach the expected value is created by the performance of the service pertaining to the strategic relationships. PBL motivates all relevant stakeholders to focus on their core competencies to produce the desired outcome in a collective way. The desired outcome can only be assured in a cost-effective way as long as it is periodically measured with the right performance parameters. Thus, defining these parameters is a crucial step for PBL contracts. For performance parameter determination, the analytic hierarchy process (AHP), which is a multi-criteria decision-making methodology for complex cases, was used within this study for a complex system. AHP has been extensively applied in various areas, including supply chains, inventory management, outsourcing, and logistics. This methodology made it possible to convert the end-user's main operation and maintenance requirements into sub-criteria contained in a single performance parameter. Those requirements were categorized and assigned weights by the relevant stakeholders. A single performance parameter capable of measuring the overall performance of a complex system is the major outcome of this study. The parameter deals with the integrated assessment of different functions, spanning training, operation, maintenance, reporting, and documentation, that are implemented within a complex system. The aim of this study is to show the methodology and processes implemented to identify a single performance parameter for measuring the whole performance of a complex system within a PBL contract. The AHP methodology is recommended as an option for researchers and practitioners who seek a lean and integrated approach to performance assessment within PBL contracts. The implementation of the AHP methodology in this study may help PBL practitioners from a methodological perspective and add value to AHP in becoming more prevalent.
Keywords: analytic hierarchy process, performance based logistics, performance measurement, performance parameters
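A small sketch of the AHP step described above: deriving weights for requirement groups from a pairwise comparison matrix via the principal eigenvector and checking Saaty's consistency ratio. The comparison matrix and the group names are hypothetical, not the study's actual stakeholder judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from a pairwise comparison matrix (Saaty scale)
    using the principal eigenvector, and report the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random index
    return w, ci / ri                              # weights, consistency ratio (< 0.1 is acceptable)

# Hypothetical comparison of four requirement groups (e.g. operation,
# maintenance, training, documentation).
A = [[1,   3,   5,   7],
     [1/3, 1,   3,   5],
     [1/5, 1/3, 1,   3],
     [1/7, 1/5, 1/3, 1]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```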
Procedia PDF Downloads 281
5401 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions
Authors: Abdelgawad, Salah El-Tahawy
Abstract:
This paper aimed to establish econometric model equations for the Nile Delta region in Egypt, which will represent a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathway in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces representing the Nile Delta region from 1 January 2015 to December 2015. The obtained results indicated that there was a significant association between the degree of the LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during the months of June, July, and August and gradually decreased to the lowest rates in January, February, and December. The obtained model depicted that increases in these climate factors were associated with an evident increase in LSD outbreaks in the Nile Delta of Egypt. The model validation process was done using the root mean square error (RMSE) and mean bias (MB), which compared the number of expected LSD outbreaks with the number of observed outbreaks and estimated the confidence level of the model. The value of RMSE was 1.38% and MB was 99.50%, confirming that the established model describes the current association between LSD outbreaks and changes in the climate factors and can also be used as a basis for predicting LSD outbreaks depending on future climatic change.
Keywords: LSD, climate factors, Nile delta, modeling
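The validation statistics quoted above (RMSE and MB) can be computed as in the sketch below. The exact MB definition used in the paper is not stated, so a ratio-of-means definition is assumed here, and the monthly counts are invented for illustration.

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and predicted outbreak counts."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((predicted - observed) ** 2))

def mean_bias(observed, predicted):
    """Mean bias expressed as the ratio of mean predicted to mean observed outbreaks (%)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 100.0 * predicted.mean() / observed.mean()

# Illustrative monthly outbreak counts; the paper's data are not reproduced here.
observed  = [2, 1, 3, 5, 9, 14, 18, 17, 11, 6, 3, 2]
predicted = [2, 2, 3, 6, 8, 13, 17, 18, 10, 6, 4, 2]
print(f"RMSE = {rmse(observed, predicted):.2f}  MB = {mean_bias(observed, predicted):.1f}%")
```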
Procedia PDF Downloads 290
5400 Computational Agent-Based Approach for Addressing the Consequences of Releasing Gene Drive Mosquito to Control Malaria
Authors: Imran Hashmi, Sipkaduwa Arachchige Sashika Sureni Wickramasooriya
Abstract:
Gene-drive technology has emerged as a promising tool for disease control by influencing the population dynamics of disease-carrying organisms. Various gene drive mechanisms, derived from global laboratory experiments, aim to strategically manage and prevent the spread of targeted diseases. One prominent strategy involves population replacement, wherein genetically modified mosquitoes are introduced to replace the existing local wild population. To enhance our understanding and aid in the design of effective release strategies, we employ a comprehensive mathematical model. The utilized approach employs agent-based modeling, enabling the consideration of individual mosquito attributes and flexibility in parameter manipulation. Through the integration of an agent-based model and a meta-population spatial approach, the dynamics of gene drive mosquito spreading in a released site are simulated. The model's outcomes offer valuable insights into future population dynamics, providing guidance for the development of informed release strategies. This research significantly contributes to the ongoing discourse on the responsible and effective implementation of gene drive technology for disease vector control.Keywords: gene drive, agent-based modeling, disease-carrying organisms, malaria
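A minimal agent-based sketch of the population-replacement idea described above: individuals carry two alleles, and heterozygotes transmit the drive allele with super-Mendelian probability. The homing efficiency, release fraction, and population size are assumptions, and the spatial meta-population structure of the study is omitted.

```python
import random

DRIVE_TRANSMISSION = 0.95   # assumed homing efficiency of the drive allele
POP_SIZE = 2000
RELEASE_FRACTION = 0.1      # fraction of released drive homozygotes
GENERATIONS = 20

def gamete(parent):
    """Pick the allele a parent passes on, biased by the gene drive."""
    if 'd' in parent and 'w' in parent:                     # heterozygote
        return 'd' if random.random() < DRIVE_TRANSMISSION else 'w'
    return random.choice(parent)                            # homozygote

# Wild population plus released genetically modified mosquitoes ('d','d').
population = [('w', 'w')] * int(POP_SIZE * (1 - RELEASE_FRACTION)) \
           + [('d', 'd')] * int(POP_SIZE * RELEASE_FRACTION)

for gen in range(1, GENERATIONS + 1):
    offspring = []
    for _ in range(POP_SIZE):
        mother, father = random.sample(population, 2)       # random mating
        offspring.append((gamete(mother), gamete(father)))
    population = offspring
    drive_freq = sum(g.count('d') for g in population) / (2 * POP_SIZE)
    if gen % 5 == 0:
        print(f"generation {gen}: drive allele frequency = {drive_freq:.2f}")
```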
Procedia PDF Downloads 65
5399 Evaluation of E-Government Service Quality
Authors: Nguyen Manh Hien
Abstract:
Service quality is the highest requirement of users, especially for services in electronic government. During the past decades, it has become a major area of academic investigation. Considering this issue, many studies have evaluated the dimensions of e-service quality and e-service contexts. This study also identifies the dimensions of service quality, but it focuses on a new conceptual framework and provides a new methodological approach to developing measurement scales of e-service quality, such as information quality, service quality, and organization quality. Finally, the study suggests a key factor for better evaluating e-government service quality.
Keywords: dimensionality, e-government, e-service, e-service quality
Procedia PDF Downloads 545
5398 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. The energy efficiency analysis for these cases is mostly focused on the study of each machine or production step, but it is not common to study the quality that the production process achieves from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the exergy aggregated to three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique, and therefore the use of this theory for discontinuous flows is proposed, starting from single models of workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components inside the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of the raw materials (the aggregated exergy value), as well as the exergy losses caused by the equipment and by irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses, and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the former describes the operation of the components of each machine and where the exergy losses occur, while the latter estimates the exergy-aggregated value of the finished product and of the wasted feedstock. The results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%, and that each effective exergy-aggregated value is about one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These losses are caused mainly by the technical limitations of the machines, the oversizing of metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From the information established in this case, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
Procedia PDF Downloads 171
5397 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes
Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas
Abstract:
The current PhD project comprises CFD modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in the emergency cooling systems of reactors. The emergency cooling system consists of inclined horizontal pipes which are immersed in a tank of subcooled water. In the case of an accident, the water level in the core decreases, steam enters the emergency pipes, and due to the subcooled water around the pipe, this steam starts to condense. These horizontal pipes act as a strong heat sink which is responsible for a quick depressurization of the reactor core when an accident happens. This project was defined in order to model all the processes happening in the emergency cooling systems. The main focus of the project is on the detection of different morphologies such as annular flow, stratified flow, slug flow, and plug flow. This is an ongoing project which was started a year ago in the Fluid Dynamics department of Helmholtz-Zentrum Dresden-Rossendorf (HZDR). At HZDR, mostly in cooperation with ANSYS, different models have been developed for modeling multiphase flows. The inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling the small-scale dispersed gas phase. The AIAD (Algebraic Interfacial Area Density) model was developed for the detection of the local morphology and the corresponding switching between morphologies. The most recent model, GENTOP, combines both concepts; GENTOP is able to simulate co-existing large-scale (continuous) and small-scale (polydispersed) structures. All these models have been validated for adiabatic cases without any phase change. Therefore, the starting point of the current PhD project is to use the available models and to integrate phase transition and wall condensation models into them. In order to simplify the problem of condensation inside horizontal tubes, three steps have been defined. The first step is the investigation of condensation inside a horizontal tube by considering only direct contact condensation (DCC) and neglecting wall condensation; the inlet of the pipe is therefore considered to be annular flow, and the AIAD model is used in order to detect the interface. The second step is the extension of the model to consider wall condensation as well, which is closer to reality. In this step, the inlet is pure steam, and due to wall condensation, a liquid film occurs near the wall, which leads to annular flow. The last step will be the modeling of the different morphologies which occur inside the tube during condensation by using the GENTOP model. With GENTOP, the dispersed phase can be considered and simulated. Finally, the results of the simulations will be validated against experimental data, which will also be available at HZDR.
Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection
Procedia PDF Downloads 307
5396 Secondary Charged Fragments Tracking for On-Line Beam Range Monitoring in Particle Therapy
Authors: G. Traini, G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, S. M. Valle, C. Voena, V. Patera
Abstract:
In particle therapy (PT) treatments, a large number of secondary particles, whose emission points are correlated with the dose released in the crossed tissues, are produced. The measurement of the secondary charged fragment component could represent a valid technique to monitor the beam range during PT treatments, which is still missing in clinical practice. Sub-millimetre precision on the beam range measurement is required to significantly optimise the technique and to improve the treatment quality. In this contribution, a detector named Dose Profiler (DP) is presented. It is specifically planned to monitor the beam range on-line, exploiting the secondary charged particles produced in PT carbon-ion treatments. In particular, the DP is designed to track the secondary fragments emitted at large angles with respect to the beam direction (mainly protons), with the aim of reconstructing the spatial coordinates of the fragment emission point by extrapolating the measured track toward the beam axis. The DP is currently under development within the INSIDE collaboration (Innovative Solutions for In-beam Dosimetry in hadrontherapy). The tracker is made of six layers (20 × 20 cm²) of BCF-12 square scintillating fibres (500 μm) coupled to silicon photomultipliers, followed by two plastic scintillator layers of 6 mm thickness. A system of front-end boards based on FPGAs arranged around the detector provides the data acquisition. The detector characterization with cosmic rays is currently ongoing, and a data-taking campaign with protons will take place in May 2017. The DP design and the performance measured using MIPs and proton beams will be reviewed.
Keywords: fragmentation, monitoring, particle therapy, tracking
Procedia PDF Downloads 235
5395 Estimation of Delay Due to Loading–Unloading of Passengers by Buses and Reduction of Number of Lanes at Selected Intersections in Dhaka City
Abstract:
One of the significant causes of increased delay at intersections under heterogeneous traffic conditions is a sudden reduction of road capacity. In this study, the delay caused by this sudden capacity reduction is estimated. Two intersections in Dhaka city were brought into the study: the Kakrail intersection and the SAARC Foara intersection. At the Kakrail intersection, a sudden reduction of capacity is seen at three downstream legs of the intersection, caused by buses slowing down or stopping for the loading and unloading of passengers. At the SAARC Foara intersection, a sudden reduction of capacity was seen at two downstream legs: at one leg it was due to the loading and unloading of buses, and at the other leg it was due both to the loading and unloading of buses and to a reduction of the number of lanes. With these considerations, the delay due to the intentional stopping or slowing down of buses and the reduction of the number of lanes at these two intersections is estimated. The delay was calculated by two approaches. The first approach comes from the concept of shock waves in traffic streams; here the delay is calculated by determining the flow, density, and speed before and after the sudden capacity reduction. The second approach comes from the deterministic analysis of queues; here the delay is calculated by determining the volume, the capacity, and the reduced capacity of the road. After determining the delay by these two approaches, the results were compared. For this study, video of each of the two intersections was recorded for one hour at the evening peak. The necessary geometric data were also collected to determine speed, flow, density, and other parameters. The delay was calculated for one hour with one hour of data at both intersections. In the case of the Kakrail intersection, the hourly delays for the Kakrail circle leg were 5.79 and 7.15 minutes, for the Shantinagar cross-intersection leg they were 13.02 and 15.65 minutes, and for the Paltan T-intersection leg they were 3 and 1.3 minutes for the first and second approaches, respectively. In the case of the SAARC Foara intersection, the delay at the Shahbag leg was only due to the intentional stopping or slowing down of buses and was 3.2 and 3 minutes for the two approaches, respectively. For the Karwan Bazar leg, the delays caused by buses were 5 and 7.5 minutes for the two approaches, and the delays caused by the reduction of the number of lanes were 2 and 1.78 minutes, respectively. Measuring the hourly delay for the Kakrail leg at Kakrail circle, it is seen that, under the first approach of delay estimation, the intentional stopping and slowing down of buses contributes 26.24% of the total delay at Kakrail circle. If the loading and unloading of buses is prohibited near intersections, and other arrangements for the loading and unloading of passengers are established far enough from the intersections, then the delay at intersections can be reduced on a significant scale and the performance of the intersections can be enhanced.
Keywords: delay, deterministic queue analysis, shock wave, passenger loading-unloading
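The two estimation approaches described above can be sketched as follows; the traffic states, blockage duration, and capacities are illustrative values, not the Kakrail or SAARC Foara measurements.

```python
# Rough sketch of the two delay-estimation approaches described in the abstract.
# All numbers below are illustrative, not the study's measurements.

def shockwave_speed(q1, k1, q2, k2):
    """Speed of the shock wave between two traffic states (flows in veh/h,
    densities in veh/km), returned in km/h."""
    return (q2 - q1) / (k2 - k1)

def deterministic_queue_delay(arrival_rate, reduced_capacity, duration_h, full_capacity):
    """Total vehicle-hours of delay for a temporary bottleneck.

    A queue builds while the reduced capacity is active (duration_h hours) and
    then dissipates once the full capacity is restored (classic deterministic
    queueing triangle); requires full_capacity > arrival_rate."""
    if arrival_rate <= reduced_capacity:
        return 0.0
    queue_at_end = (arrival_rate - reduced_capacity) * duration_h    # vehicles
    clearance_time = queue_at_end / (full_capacity - arrival_rate)   # hours
    return 0.5 * queue_at_end * (duration_h + clearance_time)        # triangle area

# Approach 1: shock wave between the approach flow and the state behind a stopped bus.
w = shockwave_speed(q1=1400, k1=35, q2=600, k2=110)
print(f"shock wave speed: {w:.1f} km/h (negative = moving upstream)")

# Approach 2: deterministic queue while a bus blocks a lane for 3 minutes.
d = deterministic_queue_delay(arrival_rate=1400, reduced_capacity=900,
                              duration_h=3 / 60, full_capacity=1800)
print(f"total delay: {d:.2f} vehicle-hours")
```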
Procedia PDF Downloads 1785394 Analyzing How Working From Home Can Lead to Higher Job Satisfaction for Employees Who Have Care Responsibilities Using Structural Equation Modeling
Authors: Christian Louis Kühner, Florian Pfeffel, Valentin Nickolai
Abstract:
Taking care of children, dependents, or pets can be a difficult and time-consuming task. Especially for part- and full-time employees, it can feel exhausting and overwhelming to meet these obligations besides working a job. Thus, working mostly at home and not having to drive to the company can save valuable time and stress. This study aims to show the influence that the working model has on the job satisfaction of employees with care responsibilities in comparison to employees who do not have such obligations. Using structural equation modeling (SEM), the three work models, “work from home”, “working remotely”, and a hybrid model, have been analyzed based on 13 influencing constructs on job satisfaction. These 13 factors have been further summarized into three groups “classic influencing factors”, “influencing factors changed by remote working”, and “new remote working influencing factors”. Based on the influencing factors on job satisfaction, an online survey was conducted with n = 684 employees from the service sector. Here, Cronbach’s alpha of the individual constructs was shown to be suitable. Furthermore, the construct validity of the constructs was confirmed by face validity, content validity, convergent validity (AVE > 0.5: CR > 0.7), and discriminant validity. In addition, confirmatory factor analysis (CFA) confirmed the model fit for the investigated sample (CMIN/DF: 2.567; CFI: 0.927; RMSEA: 0.048). The SEM-analysis has shown that the most significant influencing factor on job satisfaction is “identification with the work” with β = 0.540, followed by “Appreciation” (β = 0.151), “Compensation” (β = 0.124), “Work-Life-Balance” (β = 0.116), and “Communication and Exchange of Information” (β = 0.105). While the significance of each factor can vary depending on the work model, the SEM-analysis shows that the identification with the work is the most significant factor in all three work models and, in the case of the traditional office work model, it is the only significant influencing factor. The study shows that among the employees with care responsibilities, the higher the proportion of working from home in comparison to working from the office, the more satisfied the employees are with their job. Since the work models that meet the requirements of comprehensive care led to higher job satisfaction amongst employees with such obligations, adapting as a company to such private obligations by employees can be crucial to sustained success. Conversely, the satisfaction level of the working model where employees work at the office is higher for workers without caregiving responsibilities.Keywords: care responsibilities, home office, job satisfaction, structural equation modeling
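The reliability checks mentioned above (Cronbach's alpha, CR > 0.7, AVE > 0.5) are commonly computed as in this sketch; the Likert answers and standardized loadings are invented for illustration and do not come from the survey of the 684 employees.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability_and_ave(loadings):
    """Composite reliability (CR) and average variance extracted (AVE)
    from standardized factor loadings of one construct."""
    lam = np.asarray(loadings, dtype=float)
    err = 1 - lam**2                       # error variances for standardized items
    cr = lam.sum()**2 / (lam.sum()**2 + err.sum())
    ave = np.mean(lam**2)
    return cr, ave

# Illustrative 5-point Likert answers for one construct (e.g. "identification
# with the work") and assumed standardized loadings; not the survey's real data.
answers = np.array([[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 3, 4]])
print("alpha:", round(cronbach_alpha(answers), 3))
cr, ave = composite_reliability_and_ave([0.82, 0.77, 0.85])
print("CR:", round(cr, 3), "AVE:", round(ave, 3), "(targets: CR > 0.7, AVE > 0.5)")
```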
Procedia PDF Downloads 84