Search results for: Vehicle Routing Problem with Hard Time Windows
24284 Offset Dependent Uniform Delay Mathematical Optimization Model for Signalized Traffic Network Using Differential Evolution Algorithm
Authors: Tahseen Saad, Halim Ceylan, Jonathan Weaver, Osman Nuri Çelik, Onur Gungor Sahin
Abstract:
A new concept of an offset-dependent uniform delay mathematical optimization model is derived as the main objective of this study, solved using a differential evolution algorithm. The aim is to control the coordination problem, which depends on offset selection, and to estimate the uniform delay based on the offset choice in a traffic signal network. A periodic sinusoidal function is assumed for the arrival and departure patterns. The cycle time is optimized at the entry links, and the optimized value is used in the non-entry links as a common cycle time. The offset optimization algorithm is used to calculate the uniform delay at each link. The results are illustrated by a case study and are compared with the canonical uniform delay model derived by Webster and the Highway Capacity Manual's model. The findings show that the new model reduces the total uniform delay to almost half that of the conventional models. The mathematical objective function is robust, and the algorithm converges quickly.
Keywords: area traffic control, traffic flow, differential evolution, sinusoidal periodic function, uniform delay, offset variable
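The abstract does not give the objective function or solver settings, so the following is only an illustrative sketch of how an offset-dependent delay objective with a sinusoidal arrival pattern could be minimized with a differential evolution routine; the delay expression, cycle time, green ratio, and all other values are assumptions, not the authors' model.

```python
# Illustrative sketch only: a hypothetical offset-dependent uniform delay objective
# minimized with differential evolution. The delay formula and constants are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

CYCLE = 90.0          # common cycle time [s], assumed fixed after entry-link optimization
GREEN_RATIO = 0.5     # effective green ratio per link, assumed
LINKS = 4             # number of non-entry links, one offset each

def uniform_delay(offsets):
    """Sum of per-link delays for a sinusoidal arrival pattern (purely illustrative)."""
    t = np.linspace(0.0, CYCLE, 200)
    total = 0.0
    for k, off in enumerate(offsets):
        # Arrival rate modelled as a shifted sinusoid around a mean flow of 0.3 veh/s.
        arrivals = 0.3 * (1.0 + np.sin(2.0 * np.pi * (t - off) / CYCLE + 0.2 * k))
        red = t < (1.0 - GREEN_RATIO) * CYCLE          # vehicles arriving on red queue up
        queue = np.cumsum(arrivals * red) * (t[1] - t[0])
        total += np.trapz(queue, t) / CYCLE            # average delay contribution
    return total

bounds = [(0.0, CYCLE)] * LINKS                         # offsets bounded by the cycle time
result = differential_evolution(uniform_delay, bounds, maxiter=200, seed=1)
print("best offsets [s]:", np.round(result.x, 1), "delay measure:", round(result.fun, 2))
```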
Procedia PDF Downloads 274
24283 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs
Authors: Verónica Díaz
Abstract:
A Cartesian graph, as a mathematical object, becomes a tool for the configuration of change. It is best understood through everyday problem solving associated with its representation. Despite this, the current educational framework favors general graphs, without consideration of their argumentation. Students are required to find the mathematical function without associating it with the development of graphical language. This research describes the configurations students produce prior to Cartesian graphs when working on an everyday problem involving a time and distance variation phenomenon. The theoretical framework describes the conditions for studying the function and its modeling. This is a qualitative, descriptive study involving six undergraduate case studies carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second level describes the degree of iconicity and referentiality of an image. According to the results, students first drew figures prior to the Cartesian graph, reflecting their need to represent the context and the movement that causes the change in the phenomenon. From this, they produced Cartesian graphs representing changes in position and thereby achieved a global view of the graph. The local view, however, only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not make it possible to identify what happens on the graph when the movement characteristics change based on possible paths in the person's walking speed.
Keywords: cartesian graphs, higher education, movement modeling, problem solving
Procedia PDF Downloads 217
24282 An Energy-Balanced Clustering Method on Wireless Sensor Networks
Authors: Yu-Ting Tsai, Chiun-Chieh Hsu, Yu-Chun Chu
Abstract:
In recent years, owing to the development of wireless network technology, many researchers have devoted themselves to the study of wireless sensor networks. Applications of wireless sensor networks mainly use the sensor nodes to collect the required information and send it back to the users. Since the sensed area is often difficult to reach, there are many restrictions on the design of the sensor nodes, the most important of which is their limited energy. Because of this limited energy, researchers have proposed a number of ways to reduce energy consumption and balance the load of sensor nodes in order to increase the network lifetime. In this paper, we propose the Energy-Balanced Clustering method with Auxiliary Members on Wireless Sensor Networks (EBCAM), based on cluster routing. The main purpose is to balance the energy consumption over the sensed area and even out the distribution of dead nodes in order to avoid excessive energy consumption caused by increasing transmission distance. In addition, we use the residual energy and the average energy consumption of the nodes within a cluster to choose the cluster heads, use multi-hop transmission to deliver the data, and dynamically adjust the transmission radius according to the load conditions. We also use the auxiliary cluster members to change the delivery path according to the residual energy of the cluster head in order to relieve its load. Finally, we compare the proposed method with related algorithms via simulated experiments and analyze the results. The comparison reveals that the proposed method outperforms the other algorithms in the number of rounds achieved and the average energy consumption.
Keywords: auxiliary nodes, cluster, load balance, routing algorithm, wireless sensor network
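The abstract does not spell out the cluster-head selection rule, so the snippet below is a minimal sketch of one plausible reading: nodes whose residual energy is high relative to their average consumption are preferred as heads. The scoring formula, node count, and energy figures are all assumptions, not the EBCAM method itself.

```python
# Minimal sketch (assumed scoring rule) of energy-aware cluster-head selection.
import random

random.seed(0)
NODES = 60
nodes = [{"id": i,
          "residual": random.uniform(0.2, 1.0),        # residual energy [J], assumed
          "avg_consumption": random.uniform(0.01, 0.05)} for i in range(NODES)]

def pick_cluster_heads(nodes, n_clusters=6):
    """Rank nodes by residual energy relative to their average consumption."""
    scored = sorted(nodes,
                    key=lambda n: n["residual"] / n["avg_consumption"],
                    reverse=True)
    return scored[:n_clusters]

heads = pick_cluster_heads(nodes)
for h in heads:
    print(f"node {h['id']:2d}  residual={h['residual']:.2f} J  "
          f"avg consumption={h['avg_consumption']:.3f} J/round")
```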
Procedia PDF Downloads 274
24281 Investigation of the Effects of Biodiesel Blend on Particulate-Phase Exhaust Emissions from a Light Duty Diesel Vehicle
Authors: B. Wang, W. H. Or, S.C. Lee, Y.C. Leung, B. Organ
Abstract:
This study presents an investigation of diesel vehicle particulate-phase emissions with neat ultralow sulphur diesel (B0, ULSD) and a 5% waste cooking oil-based biodiesel blend (B5) in Hong Kong. A Euro VI light duty diesel vehicle was tested under transient (New European Driving Cycle (NEDC)), steady-state, and idling conditions on a chassis dynamometer. Chemical analyses, including organic carbon (OC) and elemental carbon (EC), as well as 30 polycyclic aromatic hydrocarbons (PAHs) and 10 oxygenated PAHs (oxy-PAHs), were conducted. The OC fuel-based emission factors (EFs) for B0 ranged from 2.86 ± 0.33 to 7.19 ± 1.51 mg/kg, while those for B5 ranged from 4.31 ± 0.64 to 15.36 ± 3.77 mg/kg. The EFs of EC were low for both fuel blends (0.25 mg/kg or below). With B5, the EFs of total PAHs decreased compared to B0. Specifically, B5 reduced total PAH emissions by 50.2%, 30.7%, and 15.2% over NEDC, steady-state, and idling, respectively. It was found that when B5 was used, PAHs and oxy-PAHs with lower molecular weight (2 to 3 rings) were reduced, whereas PAHs/oxy-PAHs with medium or high molecular weight (4 to 7 rings) were increased. Our study suggests the necessity of taking atmospheric and health factors into account for biodiesel application as an alternative motor fuel.
Keywords: biodiesel, OC/EC, PAHs, vehicular emission
Procedia PDF Downloads 170
24280 Development of an Autonomous Automated Guided Vehicle with Robot Manipulator under Robot Operation System Architecture
Authors: Jinsiang Shaw, Sheng-Xiang Xu
Abstract:
This paper presents the development of an autonomous automated guided vehicle (AGV) with a robot arm attached on top of it within the framework of the robot operating system (ROS). ROS provides libraries and tools, including hardware abstraction, device drivers, visualizers, message-passing, package management, etc. For this reason, this AGV can provide automatic navigation, parts transportation, and pick-and-place tasks using the robot arm for typical industrial production line use. More specifically, the AGV is controlled by an on-board host computer running ROS software. Command signals for vehicle and robot arm control and measurement signals from various sensors are transferred to the respective microcontrollers. Users can operate the AGV remotely through the TCP/IP protocol and perform SLAM (Simultaneous Localization and Mapping). An RGBD camera and LIDAR sensors are installed on the AGV, and their data are used to perceive the environment. For SLAM, Gmapping is used to construct the environment map via a Rao-Blackwellized particle filter, and the AMCL method (Adaptive Monte Carlo Localization) is employed for mobile robot localization. In addition, the current AGV position and orientation can be visualized with the ROS toolkit. As for robot navigation and obstacle avoidance, A* for global path planning and the dynamic window approach for local planning are implemented. The developed ROS AGV with a robot arm on it has been tested in the university factory. 2-D and 3-D maps of the factory were successfully constructed by the SLAM method. Based on this map, robot navigation through the factory with and without dynamic obstacles is shown to perform well. Finally, pick-and-place of parts using the robot arm and the ensuing delivery in the factory by the mobile robot are also accomplished.
Keywords: automated guided vehicle, navigation, robot operation system, Simultaneous Localization and Mapping
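The abstract names A* for global planning but gives no implementation detail; the grid, costs, and heuristic below are therefore assumptions in a minimal stand-alone sketch rather than the authors' ROS planner.

```python
# Minimal A* sketch on an assumed 2-D occupancy grid (0 = free, 1 = obstacle).
import heapq

GRID = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]

def astar(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:                                      # reconstruct the path
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

print(astar(GRID, (0, 0), (4, 4)))
```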
Procedia PDF Downloads 148
24279 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem
Authors: Kyugneun Lee, Ikjin Lee
Abstract:
Parameter estimation with an inverse problem often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can overcome these difficulties. In this research, a method to solve the rank deficient inverse problem is suggested. A multi-physics system whose rank deficiency is caused by response correlation is treated. Impeditive information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed. Feature selection with singular value decomposition (SVD) is used for the grouping. Next, BN inference is used for sequential conditional estimation according to the group hierarchy. The directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.
Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis
Procedia PDF Downloads 312
24278 A Diagnostic Comparative Analysis of Simultaneous Localization and Mapping (SLAM) Models for Indoor and Outdoor Route Planning and Obstacle Avoidance
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In the robotics literature, simultaneous localization and mapping (SLAM) is commonly associated with an a priori-posteriori problem. The autonomous vehicle needs a neutral map to spontaneously track its local position, i.e., "localization", while at the same time a precise estimation of the environment state is required for effective route planning and obstacle avoidance. On the other hand, environmental noise factors can significantly intensify the inherent uncertainties in using odometry information and measurements obtained from the robot's exteroceptive sensors, which in turn directly affect the overall performance of the corresponding SLAM. Therefore, the current work is primarily dedicated to providing a diagnostic analysis of five SLAM algorithms: FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, and DP-SLAM. A simulated SLAM environment consisting of two sets of landmark locations and robot waypoints was set up based on modified EKF and UKF in MATLAB, using two separate maps for indoor and outdoor route planning subject to natural and artificial obstacles. The simulation results are expected to provide an unbiased platform to compare the estimation performances of the five SLAM models as well as the reliability of each SLAM model for indoor and outdoor applications.
Keywords: route planning, obstacle, estimation performance, FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, DP-SLAM
Procedia PDF Downloads 443
24277 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle's Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect's intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans's "Performance Measurement System" (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a machine learning approach based on reinforcement learning that can relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
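The Bayesian update itself is not written out in the abstract, so the following is a small illustrative sketch of the idea: each detection re-weights the posterior over candidate targets by how consistent the observed move is with the shortest path toward each target. The toy graph, likelihood model, and numbers are assumptions, not the PeMS-based system described above.

```python
# Illustrative Bayesian posterior update over candidate targets (assumed toy model).
import networkx as nx

G = nx.Graph()  # toy road graph; edge weights = average travel time [min], assumed
G.add_weighted_edges_from([("A", "B", 4), ("B", "C", 3), ("B", "D", 5),
                           ("C", "T1", 6), ("D", "T2", 4)])
targets = ["T1", "T2"]
posterior = {t: 1.0 / len(targets) for t in targets}           # uniform prior

def update(posterior, prev_node, curr_node, eps=0.1):
    """Re-weight targets by whether the observed move shortened the path to them."""
    new = {}
    for t, p in posterior.items():
        before = nx.shortest_path_length(G, prev_node, t, weight="weight")
        after = nx.shortest_path_length(G, curr_node, t, weight="weight")
        likelihood = 1.0 - eps if after < before else eps       # assumed likelihood model
        new[t] = p * likelihood
    z = sum(new.values())
    return {t: v / z for t, v in new.items()}

for prev, curr in [("A", "B"), ("B", "C"), ("C", "T1")]:        # simulated detections
    posterior = update(posterior, prev, curr)
    print(f"after {prev}->{curr}: {posterior}")
```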
Procedia PDF Downloads 114
24276 Vibration and Parametric Instability Analysis of Delaminated Composite Beams
Authors: A. Szekrényes
Abstract:
This paper revisits the free vibration problem of delaminated composite beams. It is shown that during the vibration of composite beams the delaminated parts are subjected to parametric excitation. This can lead to dynamic buckling during the motion of the structure. The equation of motion includes a time-dependent stiffness and therefore leads to a system of Mathieu-Hill differential equations. The free vibration analysis of the beams is carried out in the usual way using beam finite elements. The dynamic buckling problem is investigated locally, and the critical buckling forces are determined by the modified harmonic balance method using an imposed time function of the motion. The stability diagrams are created, and the numerical predictions are compared to experimental results. The most important findings are the critical amplitudes at which delamination buckling takes place, the stability diagrams representing the instability of the system, and the realistic mode shape prediction, in contrast with the unrealistic results of models available in the literature.
Keywords: delamination, free vibration, parametric excitation, sweep excitation
Procedia PDF Downloads 344
24275 A Mini Radar System for Low Altitude Targets Detection
Authors: Kangkang Wu, Kaizhi Wang, Zhijun Yuan
Abstract:
This paper deals with a mini radar system aimed at detecting small targets at low altitude. The radar operates at Ku-band in the frequency modulated continuous wave (FMCW) mode with two receiving channels. The radar system has the characteristics of compactness, mobility, and low power consumption. This paper focuses on the implementation of the radar system, and the block least mean square (Block LMS) algorithm is applied to minimize fortuitous distortion. A series of experiments validates that the track of an unmanned aerial vehicle (UAV) can be easily distinguished with the radar system.
Keywords: unmanned aerial vehicle (UAV), interference, Block Least Mean Square (Block LMS) Algorithm, Frequency Modulated Continuous Wave (FMCW)
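The abstract applies a Block LMS filter but does not reproduce it; the snippet below is a generic Block LMS adaptive filter sketch (block size, filter length, step size, and signals are assumed for illustration), not the radar's actual signal chain.

```python
# Generic Block LMS adaptive filter sketch; all parameters are illustrative assumptions.
import numpy as np

def block_lms(x, d, num_taps=8, block_size=16, mu=0.01):
    """Adapt filter weights once per block instead of once per sample."""
    w = np.zeros(num_taps)
    y = np.zeros_like(d)
    for start in range(0, len(x) - block_size, block_size):
        grad = np.zeros(num_taps)
        for n in range(start, start + block_size):
            if n < num_taps - 1:
                continue
            u = x[n - num_taps + 1:n + 1][::-1]      # most recent samples first
            y[n] = w @ u
            e = d[n] - y[n]
            grad += e * u                            # accumulate gradient over the block
        w += mu * grad / block_size                  # single update per block
    return y, w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                        # reference (interference) input
h = np.array([0.6, -0.3, 0.1])                       # assumed unknown channel
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
y, w = block_lms(x, d)
print("estimated taps:", np.round(w[:3], 3))         # should approach h
```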
Procedia PDF Downloads 319
24274 Characteristics of the Severe Rollover Crashes in the UAE Using In-Depth Crash Investigation Data
Authors: Yaser E. Hawas, Md. Didarul Alam
Abstract:
Rollover crashes are complex events entailing interactions of driver, road, vehicle, and environmental factors. The primary objective of this paper is to present an empirical approach that can be used to characterise rollover crashes and to identify some of the important factors that may lead to rollovers. Among the studied factors are the vehicle types and the rollover occurrence rate after hitting various barrier types. The analysis carried out indicated that 71% of the rollover crashes occurred after impact and that the most common type of rollover initiation is "trip/turn over" (nearly 50%). It was also found that light truck vehicles (LTVs) are more likely to roll over than sedan vehicles. Barrier impacts are associated with an increased incidence of rollover.
Keywords: empirical, hitting barrier, in-depth crash investigation, rollover, severe crash
Procedia PDF Downloads 369
24273 Designing Electronic Kanban in Assembly Line Tailboom at XYZ Corp to Reduce Lead Time
Authors: Nadhifah A. Nugraha, Dida D. Damayanti, Widia Juliani
Abstract:
Airplane manufacturing is growing along with increasing demand from consumers. The helicopter tail, called the Tailboom, is a product of the helicopter division at XYZ Corp, where the Tailboom assembly line is a pull system. Based on observations of existing conditions at XYZ Corp, production is still unable to meet consumer demand; the lead time is greater than the plan agreed upon with the consumers. In the assembly process, each work station experiences a lack of the parts and components needed for assembly. This happens because of delays in getting the required part information and because there is no warning about the availability of the needed parts, which leaves some parts unavailable in the assembly warehouse. The lack of parts and components from the previous work station causes the assembly process to stop, and the assembly line also stops at the next station. As a result, production finishes late and behind schedule. Resolving these problems requires a controlling process, namely controlling the assembly line so that all components and subassemblies arrive in the right amount and at the right time. This study applies one of the Just-in-Time tools, namely Kanban, and adds automation so that the communication line becomes an efficient and effective electronic Kanban. The problem can be solved by reducing non-value-added time, such as waiting time and idle time. The proposed control of the Tailboom assembly line results in a smooth assembly line without waiting, reduced lead time, and production times that meet the schedule agreed with the consumers.
Keywords: kanban, e-Kanban, lead time, pull system
Procedia PDF Downloads 114
24272 Efficacy of Problem Solving Approach on the Achievement of Students in Mathematics
Authors: Akintunde O. Osibamowo, Abdulrasaq O. Olusanya
Abstract:
The present study was designed to examine the effect of a problem-solving approach as a medium of instruction in the teaching and learning of mathematics on improving student achievement. One hundred (100) students were randomly chosen from five (5) junior secondary schools in Ijebu-Ode Local Government Area of Ogun State, Nigeria. The data were collected through a Mathematics Achievement Test (MAT) administered to the two groups (experimental and control). The study confirmed that there is a significant difference in the achievement of students exposed to the problem-solving approach compared to those not exposed. The results also indicated that male students had a greater mean score than female students, with no significant difference in their achievement. The results of the study support the use of the problem-solving approach in the teaching and learning of mathematics in secondary schools.
Keywords: problem, achievement, teaching phases, experimental control
Procedia PDF Downloads 288
24271 Ignition Interlock Device for Motorcycles
Authors: Luisito L. Lacatan, Zacha Valerie G. Ancheta, Michelangelo A. Dorado, Lester Joseph M. Ochoa, Anthony Mark G. Tayabas
Abstract:
An Ignition Interlock Device, or IID, is a mechanism installed inside a vehicle that requires the driver to breathe into the device before starting the vehicle. If the IID detects that the alcohol level or blood alcohol content (BAC) is higher than the accepted value, the engine will not start. If the driver is not able to provide a clean breath sample, the IID will log the event, warn the driver, and then sound an alarm. The purpose of the IID is to prevent accidents due to driving under the influence (DUI). Despite the rise of the two-wheeled vehicle in the Philippines, due to its mobility and purchasing power, IIDs are still mainly installed on four-wheeled vehicles. Even though riding a motorcycle while drunk is more dangerous, only a small number of devices are installed on motorcycles and scooters. The general objective of this study was to develop a system with hardware and software components that implements an IID on motorcycles. The study employed a descriptive method of research. The study also concluded the following: the infrared link must provide point-to-point communication, the breathalyzer on the helmet should react to ethanol, the microcontroller on the motorcycle should accept all IR signals from the helmet and interpret them, and the GPS shield should have an unobstructed line-of-sight communication with the GPS satellites.
Keywords: blood alcohol content, breathalyser, driving under the influence, global positioning system, global system for mobile communication
Procedia PDF Downloads 325
24270 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work concerns developing a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the changes in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectral power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second successively with different color codes. The segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
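As a rough illustration of the band-power segmentation idea described above, the sketch below band-pass filters a synthetic one-channel signal and labels one-second windows by their dominant frequency band; the band definitions, filter order, and labeling rule are assumptions, not the authors' exact pipeline (which also includes notch filtering, PCA, and wavelet de-noising).

```python
# Rough sketch: label one-second EEG windows by dominant frequency band (assumed rule).
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256                                   # sampling rate [Hz], assumed
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(sig, lo=0.5, hi=45.0, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def band_powers(window, fs=FS):
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs))
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s signal: alpha-dominant first half, beta-dominant second half.
t = np.arange(0, 10, 1 / FS)
sig = np.where(t < 5, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))
sig = bandpass(sig + 0.2 * np.random.default_rng(0).standard_normal(len(t)))

for s in range(10):                         # one-second, non-overlapping windows
    win = sig[s * FS:(s + 1) * FS]
    powers = band_powers(win)
    print(f"second {s}: dominant band = {max(powers, key=powers.get)}")
```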
Procedia PDF Downloads 397
24269 A Spatial Approach to Model Mortality Rates
Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang
Abstract:
Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore often the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually higher or lower mortality rates than their neighbors, where the "location" of mortality rates is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method—spatial scan statistics, a local statistical test based on the likelihood ratio test—to evaluate where there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of the age parameters not being constant. Next, we show that adding the cluster effect can solve the non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection
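For readers unfamiliar with the baseline model being modified, the snippet below is a minimal Lee–Carter fit of log mortality rates, ln m(x,t) = a(x) + b(x)k(t), via SVD on synthetic data; it illustrates only the standard model, not the spatial cluster extension proposed in the abstract, and the data are fabricated for the example.

```python
# Minimal Lee-Carter fit via SVD on synthetic data (illustrates the baseline model only).
import numpy as np

rng = np.random.default_rng(1)
ages, years = 20, 40
true_a = np.linspace(-7.0, -2.0, ages)             # assumed age profile of log mortality
true_b = np.linspace(0.02, 0.08, ages)
true_b /= true_b.sum()                             # age sensitivities, normalised to sum to 1
true_k = np.linspace(10.0, -10.0, years)           # assumed declining period index
log_m = true_a[:, None] + np.outer(true_b, true_k) + 0.02 * rng.standard_normal((ages, years))

# Lee-Carter: ln m(x,t) = a(x) + b(x) * k(t), fitted from the centered matrix by SVD.
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()                    # normalise so that sum(b) = 1
k_hat = s[0] * Vt[0, :] * U[:, 0].sum()            # k(t) absorbs the remaining scale

print("corr(b_hat, true_b) =", round(np.corrcoef(b_hat, true_b)[0, 1], 3))
print("corr(k_hat, true_k) =", round(np.corrcoef(k_hat, true_k)[0, 1], 3))
```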
Procedia PDF Downloads 170
24268 Crop Classification using Unmanned Aerial Vehicle Images
Authors: Iqra Yaseen
Abstract:
Image processing in the context of computer vision, one of the well-known areas of computer science and engineering, has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection, and for measuring other crop parameters. Vegetation localization is the basis for performing these tasks, as it helps to identify the area where the crop is present. The productivity of the agriculture industry can be increased via image processing based on unmanned aerial vehicle photography and satellites. In this paper, we apply machine learning techniques such as Convolutional Neural Networks, deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide the crop into distinct groups and choose the best way to use it.
Keywords: image processing, UAV, YOLO, CNN, deep learning, classification
Procedia PDF Downloads 104
24267 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time
Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma
Abstract:
Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion have imposed a significant time burden around the world, and one significant solution to these problems can be the proper implementation of the Intelligent Transport System (ITS). It involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The classification problem of traffic signs needs to be solved, as it is a major step in our journey towards building semi-autonomous/autonomous driving systems. This work focuses on implementing an approach to solve the problem of traffic sign classification by developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model addresses the concern of an exploding number of parameters and uses data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
Keywords: multiclass classification, convolution neural network, OpenCV
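The abstract gives only the dataset and the reported accuracy; the following is a small, generic CNN classifier sketch in Keras for 43-class traffic-sign images (input size, layer sizes, and training settings are assumptions, and the GTSRB data loading is replaced by random placeholder arrays), not the authors' exact architecture.

```python
# Generic CNN sketch for 43-class traffic-sign classification (architecture is assumed).
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 43                      # GTSRB has 43 sign classes
INPUT_SHAPE = (32, 32, 3)             # assumed resized RGB input

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder random data stands in for the GTSRB images and labels.
x_train = np.random.rand(256, *INPUT_SHAPE).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
model.summary()
```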
Procedia PDF Downloads 174
24266 A Probabilistic Study on Time to Cover Cracking Due to Corrosion
Authors: Chun-Qing Li, Hassan Baji, Wei Yang
Abstract:
Corrosion of steel in reinforced concrete structures is a major problem worldwide. The volume expansion of corrosion products causes concrete cover cracking, which could lead to delamination of the concrete cover. The time to cover cracking plays a key role in the assessment of the serviceability of reinforced concrete structures subjected to corrosion. Many analytical, numerical, and empirical models have been developed to predict the time to cracking initiation due to corrosion. In this study, a numerical model based on finite element modeling of the corrosion-induced cracking process is used. In order to predict the service life based on the time to cover cracking, the numerical approach is coupled with a probabilistic procedure. In this procedure, all the influential factors affecting the time to cover cracking are modeled as random variables. The results show that the time to cover cracking is highly variable. It is also shown that the rust product expansion ratio and the size of the more porous concrete zone around the rebar are the most influential factors in predicting the service life of corrosion-affected structures.
Keywords: corrosion, crack width, probabilistic, service life
Procedia PDF Downloads 206
24265 Analysis of a High Voltage Direct Current (HVDC) Connection Using a Real-Time Simulator Under Various Disturbances
Authors: Mankour Mohamed, Miloudi Mohamed
Abstract:
A thorough and accurate simulation is necessary for the study of a High Voltage Direct Current (HVDC) link during various types of disturbances, including internal faults on both converters, either on the rectifier or on the inverter, as well as external faults, such as AC or DC faults on either converter side within the DC link. In this study, we examine how an HVDC inverter responds to three different types of failures: faults at the inverter valve, system control faults, and single-phase-to-ground AC faults at the sending end of the inverter side. Because commutation failure is the most frequent problem that may affect inverter valves, particularly those based on thyristor valves (LCC, line-commutated converter), it is important to explore precisely which circumstances generate and aggravate commutation failure at the inverter valves. Thanks to the techniques used to accelerate the simulation, digital real-time simulators are now the most powerful tools for providing simulation results. The RT-LAB real-time platform HYPERSIM OP-5600 is used to implement the Simulation-in-the-Loop (SIL) technique, which is used to validate the results. It is demonstrated how the system recovers from both the internal faults and the AC fault. The simulation findings show how crucial a role the control system plays in fault recovery.
Keywords: hypersim simulator, HVDC systems, mono-polar link, AC faults, misfiring faults
Procedia PDF Downloads 93
24264 Three Issues for Integrating Artificial Intelligence into Legal Reasoning
Authors: Fausto Morais
Abstract:
Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments as well. In Brazil, the artificial intelligence system Victor has been classifying cases according to the Supreme Court's standards. When those programs perform such tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court's usage of the Victor program. This program generated efficiency and consistency. On the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language into computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and, in this picture, the legal anthropocentrism argument takes place. Such an argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may work around human beings' problems, such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could be able to pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases. In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian Judiciary. Through such a procedure, artificial intelligence programs can support a human decision in hard cases, providing legal standards and arguments based on empirical evidence. Those legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.
Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning
Procedia PDF Downloads 144
24263 Synthesis and Performance Adsorbent from Coconut Shells Polyetheretherketone for Natural Gas Storage
Authors: Umar Hayatu Sidik
Abstract:
The natural gas vehicle represents a cost-competitive, lower-emission alternative to the gasoline-fuelled vehicle. The immediate challenge that confronts natural gas is increasing its energy density. This paper addresses the question of energy density by reviewing storage technologies for natural gas with an improved adsorbent. Technical comparisons are made between storage systems containing adsorbent and conventional compressed natural gas, based on the associated number of moles contained with Compressed Natural Gas (CNG) and Adsorbed Natural Gas (ANG). We also compare gas storage in different cylinder types (1, 2, 3 and 4) based on weight factor and storage capacity. For the storage tank system, we discuss the concept of carbon adsorbents, which, when used in CNG tanks, offer a means of increasing on-board fuel storage and thereby increasing the driving range of the vehicle. It is confirmed that the density of the stored gas in ANG is higher than that of compressed natural gas (CNG) operated at the same pressure. The obtained experimental data were correlated using linear regression analysis with common adsorption kinetic models (pseudo-first order and pseudo-second order) and isotherm models (Sips and Toth). The pseudo-second-order kinetics gave the best fit, with a correlation coefficient of 0.9945 at 35 bar. For the adsorption isotherms, the Sips model shows the better fit, with a regression coefficient (R²) of 0.9982 and the lowest RMSD value of 0.0148. The findings reveal the potential of the adsorbent in natural gas storage applications.
Keywords: natural gas, adsorbent, compressed natural gas, adsorption
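To make the model-fitting step concrete, the sketch below fits a pseudo-second-order kinetic curve and a Sips isotherm to fabricated uptake data with non-linear least squares; the equations are the standard textbook forms, and all data points and initial guesses are invented for illustration, not the paper's measurements.

```python
# Sketch: fitting pseudo-second-order kinetics and a Sips isotherm to fabricated data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t), the standard pseudo-second-order form."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def sips(p, qmax, b, n):
    """q(p) = qmax*(b*p)^n / (1 + (b*p)^n), the standard Sips isotherm."""
    return qmax * (b * p)**n / (1.0 + (b * p)**n)

# Fabricated uptake-vs-time and uptake-vs-pressure data (for illustration only).
t = np.array([1, 2, 5, 10, 20, 40, 60], dtype=float)            # time [min]
q_t = np.array([1.1, 1.9, 3.3, 4.4, 5.2, 5.7, 5.8])             # uptake [mmol/g]
p = np.array([1, 5, 10, 15, 20, 25, 30, 35], dtype=float)       # pressure [bar]
q_p = np.array([0.8, 2.9, 4.5, 5.6, 6.3, 6.8, 7.1, 7.3])        # uptake [mmol/g]

(qe, k2), _ = curve_fit(pseudo_second_order, t, q_t, p0=[6.0, 0.05])
(qmax, b, n), _ = curve_fit(sips, p, q_p, p0=[8.0, 0.1, 1.0])

r2 = 1 - np.sum((q_p - sips(p, qmax, b, n))**2) / np.sum((q_p - q_p.mean())**2)
print(f"PSO fit: qe={qe:.2f}, k2={k2:.4f}; Sips fit: qmax={qmax:.2f}, b={b:.3f}, n={n:.2f}, R^2={r2:.4f}")
```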
Procedia PDF Downloads 59
24262 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest; any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
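The multi-scale idea is easier to see in code; the sketch below fits a plane to square blocks of increasing size (2x2, 4x4, 8x8, ...) over a synthetic DEM and, for every cell, keeps the slope from the block size with the smallest residual variance. For clarity it refits each block directly rather than accumulating the additive regression sums described above, so it only illustrates the scale-selection behaviour, not the single-pass aggregation; all data and details are assumptions.

```python
# Sketch: pick, per DEM cell, the slope from the window size with minimal residual variance.
import numpy as np

rng = np.random.default_rng(0)
N = 64
yy, xx = np.mgrid[0:N, 0:N].astype(float)
dem = 0.05 * xx + 0.02 * yy + 0.01 * rng.standard_normal((N, N))  # synthetic plane + noise

def plane_fit(z):
    """Least-squares plane z = a + b*x + c*y over a block; return slope and residual variance."""
    h, w = z.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    A = np.column_stack([np.ones(z.size), x.ravel(), y.ravel()])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    resid = z.ravel() - A @ coef
    slope = np.hypot(coef[1], coef[2])
    return slope, resid.var()

best_slope = np.zeros((N, N))
best_var = np.full((N, N), np.inf)
size = 2
while size <= N:
    for r in range(0, N, size):
        for c in range(0, N, size):
            slope, var = plane_fit(dem[r:r + size, c:c + size])
            block = (slice(r, r + size), slice(c, c + size))
            update = var < best_var[block]                        # cells where this scale fits best
            best_var[block] = np.where(update, var, best_var[block])
            best_slope[block] = np.where(update, slope, best_slope[block])
    size *= 2

print("mean selected slope:", round(best_slope.mean(), 3), "(true plane slope is about 0.054)")
```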
Procedia PDF Downloads 126
24261 Performance of an Automotive Engine Running on Gasoline-Condensate Blends
Authors: Md. Ehsan, Cyrus Ashok Arupratan Atis
Abstract:
Significantly lower cost, bulk availability, absence of identification color additives, and relative ease of mixing with fuels have made gas-field condensates a lucrative option as an adulterant for gasoline in Bangladesh. Since widespread adulteration of fuels with gas-field condensates is a problem existing mainly in developing countries like Bangladesh and Nigeria, research on the effects of such fuel adulteration is very limited. Since the properties of gas-field condensate vary widely depending on geographical location, studies need to be based on local condensate feeds. This study quantitatively evaluates the effects of blending gas-field condensates with gasoline (octane) in terms of fuel properties, engine performance, and exhaust emissions. Condensate samples collected from the Kailashtila gas field were blended with octane, ranging from 30% to 75% by volume; however, for blends with more than 60% condensate, cold starting of the engine became difficult. Investigation revealed that the condensate samples had significantly higher distillation temperatures than octane but were not far different in terms of heating value and carbon residue. Engine tests showed the Kailashtila blends performing quite similarly to octane in terms of power and thermal efficiency. No noticeable knocking was observed from the in-cylinder pressure traces. For all the gasoline-condensate blends, the test engine ran with a relatively leaner air-fuel mixture, delivering slightly lower CO emissions, while HC and NOx emissions were similar to octane. Road trials of a test vehicle in real traffic conditions and on a standard gradient using a 50% (v/v) gasoline-condensate blend were also carried out. The test vehicle did not exhibit any noticeable difference in drivability compared to octane.
Keywords: condensates, engine performance, fuel adulteration, gasoline-condensate blends
Procedia PDF Downloads 249
24260 Finding Viable Pollution Routes in an Urban Network under a Predefined Cost
Authors: Dimitra Alexiou, Stefanos Katsavounis, Ria Kalfakakou
Abstract:
In an urban area, transportation routes should be planned so as to minimize the pollution they provoke while taking into account the cost of such routes. In the sequel, these routes are referred to as pollution routes. The transportation network is expressed by a weighted graph G = (V, E, D, P), where every vertex represents a location to be served and E contains unordered pairs (edges) of elements in V that indicate a simple road. The distance/cost and a weight that depicts the air pollution provoked by a vehicle transition on every road are assigned to each road as well; these are the items of the sets D and P, respectively. Furthermore, the investigated pollution routes must not exceed predefined corresponding values concerning the route cost and the route pollution level during the vehicle transition. In this paper, we present an algorithm that generates such routes so that the decision maker can select the most appropriate one.
Keywords: bi-criteria, pollution, shortest paths, computation
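The abstract does not state which algorithm generates the routes; the sketch below simply enumerates simple paths in a small weighted graph and keeps those whose total cost and total pollution both stay under predefined bounds, as one naive way to produce candidate pollution routes for a decision maker. The graph, weights, and bounds are invented for illustration.

```python
# Naive sketch: enumerate simple paths whose cost and pollution stay under given bounds.
import networkx as nx

G = nx.Graph()
# Each edge carries a distance/cost D and a pollution weight P (illustrative values).
edges = [("s", "a", 2, 5), ("s", "b", 4, 2), ("a", "b", 1, 1),
         ("a", "t", 6, 3), ("b", "t", 3, 6), ("a", "c", 2, 2), ("c", "t", 3, 1)]
for u, v, d, p in edges:
    G.add_edge(u, v, cost=d, pollution=p)

COST_BOUND, POLLUTION_BOUND = 9, 9        # predefined limits, assumed

def pollution_routes(G, src, dst):
    feasible = []
    for path in nx.all_simple_paths(G, src, dst):
        cost = sum(G[u][v]["cost"] for u, v in zip(path, path[1:]))
        pol = sum(G[u][v]["pollution"] for u, v in zip(path, path[1:]))
        if cost <= COST_BOUND and pol <= POLLUTION_BOUND:
            feasible.append((path, cost, pol))
    return sorted(feasible, key=lambda r: r[2])   # least-polluting candidates first

for path, cost, pol in pollution_routes(G, "s", "t"):
    print(" -> ".join(path), f"cost={cost}, pollution={pol}")
```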
Procedia PDF Downloads 372
24259 Experimental Analysis for the Inlet of the Brazilian Aerospace Vehicle 14-X B
Authors: João F. A. Martos, Felipe J. Costa, Sergio N. P. Laiton, Bruno C. Lima, Israel S. Rêgo, Paulo P. G. Toro
Abstract:
Nowadays, the scramjet is a topic that has attracted the attention of several scientific communities (USA, Australia, Germany, France, Japan, India, China, Russia), which are investing in this type of propulsion system due to its potential to facilitate access to space and reach hypersonic speed. The Brazilian hypersonic scramjet aerospace vehicle 14-X B is a technological demonstrator of a hypersonic airbreathing propulsion system based on supersonic combustion (scramjet), intended to be flight tested in the Earth's atmosphere at 30 km altitude and Mach number 7. The 14-X B has been designed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute for Advanced Studies (IEAv) in Brazil. The IEAv Hypersonic Shock Tunnel, named T3, is a ground-test facility able to reproduce the flight conditions, namely the Mach number as well as the pressure and temperature in the test section, close to those encountered during the test flight of the vehicle 14-X B at design conditions. A 1-m long stainless steel 14-X B model was experimentally investigated in the T3 Hypersonic Shock Tunnel for a freestream Mach number of 7. Static pressure measurements along the lower surface of the 14-X B model, along with high-speed schlieren photographs taken of the 5.5° leading edge and the 14.5° deflection compression ramp, provided experimental data that were compared to the analytical-theoretical solutions and the computational fluid dynamics (CFD) simulations. The results show a good qualitative agreement, consequently demonstrating the importance of these methods in the design of the 14-X B hypersonic aerospace vehicle.
Keywords: 14-X, CFD, hypersonic, hypersonic shock tunnel, scramjet
Procedia PDF Downloads 356
24258 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey
Authors: Mahdiyeh Zafaranchi
Abstract:
With the rapid development of urbanization and improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of that, energy-saving issues have become more important among engineers. Moreover, the building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen in all stages of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is saving energy during the life expectancy of the building by using passive and active systems, without sacrificing comfort and quality to reach these goals. The main aim of this study is to investigate passive strategies (which do not need energy consumption or use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with eQuest software using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of the materials, windows, shading devices, thermal insulation, exposed envelope ratio, window/wall ratio, and lighting system on the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was indicated. After analyzing the base model with eQuest, a final scenario with good energy performance was suggested. The results showed that decreases in the U-values of the materials, the exposed envelope ratio, and the windows had a significant effect on energy consumption. Finally, savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved annually in the suggested model.
Keywords: efficient building, electric and gas consumption, eQuest, passive parameters
Procedia PDF Downloads 110
24257 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we are generating a lot of data, and we need a specific device to store all these data. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of these devices. To overcome all these issues, we implemented a cloud space for storing the data, which provides more security to the data. We can access the data just by using the internet from anywhere in the world. We implemented all of this in Java using the NetBeans IDE. Once a user uploads the data, they do not have any rights to change the data. Users' uploaded files are stored in the cloud with the system time as the file name, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
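The abstract names AES for securing stored files but does not show the encryption step; the snippet below is a small, generic illustration of AES-GCM encryption before upload, written in Python with the `cryptography` package (a different language and library from the Java/NetBeans implementation described above), with key handling simplified for the example.

```python
# Generic AES-256-GCM example (illustrative only; key management is simplified).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # in practice the key must be stored securely
aesgcm = AESGCM(key)

plaintext = b"file contents to be uploaded to the private cloud"
nonce = os.urandom(12)                         # a fresh 96-bit nonce per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data=None)

# The nonce is not secret and can be stored alongside the ciphertext in the cloud.
stored_blob = nonce + ciphertext

# Later, after downloading the blob, the holder of the key can decrypt it.
recovered = aesgcm.decrypt(stored_blob[:12], stored_blob[12:], associated_data=None)
assert recovered == plaintext
print("round-trip OK, ciphertext length:", len(ciphertext))
```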
Procedia PDF Downloads 204
24256 Investigation into Relationship between Spaced Repetitions and Problems Solving Efficiency
Authors: Sidharth Talan, Rajlakshmi G. Majumdar
Abstract:
Problem-solving skill is one of the few skills that professionals and academicians around the world constantly endeavor to improve in order to sustain themselves in an ever-growing competitive environment. This paper focuses on evaluating a hypothesized relationship between the problem-solving efficiency of an individual and spaced repetitions, conducted with a time interval of one day over a period of two weeks. The paper utilizes the univariate regression analysis technique to assess the best-fit curve that can explain the significant relationship between the given two variables. Anagram solving is incorporated as the testing process for the analysis. Since anagram solving involves rearranging a jumbled word to form a correct word, it is an efficient process for observing the attention span, visual-motor coordination, and verbal ability of an individual. Based on the analysis for a sample population of 30, the problem-solving efficiency of an individual, measured in terms of the score in each test, was found to be significantly correlated with the time period measured in days.
Keywords: Anagrams, histogram plot, moving average curve, spacing effect
Procedia PDF Downloads 163
24255 Comparative Assessment of ABS and Disk Brake Systems
Authors: Saleh Mobasseri, Mohammad Mobasseri
Abstract:
The article reviews the history of the rise of the brake system and describes its importance in passengers' lives. The performance of the disc brake system and ABS are also compared with each other through a kinetic and kinematic analysis of the braking system, and the impact of each parameter on the vehicle stopping distance is evaluated. The anti-lock braking system (ABS) is one of the most important features that affect vehicle safety, and for this reason much effort has been made to improve this system. The objectives of the anti-lock system (ABS) are as follows: preventing the wheels from locking, achieving maximum technical momentum in terms of braking, maintaining stability, and reducing stopping distances. In this paper, we present a comparative study of ABS and disc brakes.
Keywords: anti-lock braking system (ABS), stopping distances, booster, car stability, force exerted on the brake pedal
Procedia PDF Downloads 396