Search results for: parallel execution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1691

1481 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: We applied the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts the execution time.
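
As a rough illustration of the screening approach described in this abstract (not the authors' code), the sketch below fits an ordinary least-squares linear model to a two-level factorial design; the factor codings and the synthetic runtimes are placeholders, and a real fractional design with replications would use a chosen subset of runs measured on actual Spark clusters.

```python
# Hypothetical sketch: main-effect screening of five cluster factors on
# execution time with a two-level (+/-1 coded) factorial design and a
# linear model. The response below is synthetic, for illustration only.
import itertools
import numpy as np

factors = ["data_size", "nodes", "cores", "memory", "disks"]

# Full 2^5 design in coded units; a fractional factorial keeps a subset of rows.
X = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Placeholder runtimes (s) standing in for measured Spark job times.
rng = np.random.default_rng(1)
y = 600.0 - 150.0 * X[:, 0] - 90.0 * X[:, 1] - 10.0 * X[:, 2] + rng.normal(0.0, 5.0, len(X))

# Fit y = b0 + sum(b_i * x_i); the magnitude of each b_i ranks that factor's impact.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:>9s}: {b:+7.1f}")
```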

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 97
1480 The Influence of Disturbances Generated by Arc Furnaces on the Power Quality

Authors: Z. Olczykowski

Abstract:

The paper presents the impact of electric arc furnace operation on power quality. Arc equipment is one of the largest receivers supplied by the power system. Electric arc disturbances arising during the melting process in these furnaces cause abrupt changes in the reactive power of the furnaces. The currents drawn by these devices change abruptly, which in turn causes voltage fluctuations and light flicker. Quantitative evaluation of voltage fluctuations is now the basic criterion for assessing the influence of a disturbing load on the supply network. The paper presents a method for determining the range of voltage fluctuations and light flicker during parallel operation of arc devices. The results of measurements of voltage fluctuation and light flicker indicators recorded in the power supply networks of steelworks are presented for different numbers of parallel arc devices. The measurements of power quality parameters were aimed at verifying the proposed method in practice. Changes in other electricity parameters were also analyzed: higher harmonic content, asymmetry, and voltage dips.

Keywords: power quality, arc furnaces, propagation of voltage fluctuations, disturbances

Procedia PDF Downloads 120
1479 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems

Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu

Abstract:

An inter-agent communication manager facilitates communication among mobile agents via a message-passing mechanism. To date, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems are capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure the messages exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents that are created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a 2-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent is charged with screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent is charged with monitoring and restraining its activities. This approach secures legitimate agent communication from Man-in-the-Middle and Replay attacks.

Keywords: agent communication, introspective agent, isolation of agent, policy enforcement system

Procedia PDF Downloads 285
1478 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenging issues faced by seismic retrofitting consultants and employers is quick decision-making on the demolition or retrofitting of a structure, either at the current time or in the future. However, the existing models proposed by researchers cover only one of the aspects of cost, execution method, and structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a new comprehensive model capable of covering all the factors simultaneously. This study attempts to provide an integrated framework that can be utilized to select the most appropriate earthquake risk mitigation solution for buildings. This framework can overcome the limitations of current models by taking into account several factors such as cost, execution method, risk-taking and structural failure. In the proposed model, the database and essential information about retrofitting projects are developed based on historical retrofit project data. In the next phase, an analysis is conducted to assess the vulnerability of the building under study. Then, an artificial neural network technique is employed to calculate the cost of retrofitting. While calculating the current price of the structure, an economic analysis is conducted to compare demolition versus retrofitting costs. At the next stage, the optimal method is identified. Finally, the implementation of the framework was demonstrated by collecting data concerning 155 previous projects.
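
A minimal sketch of the demolition-versus-retrofit comparison mentioned above, under assumed cost figures (none taken from the paper); in the framework the retrofit cost would come from the trained neural network and the other figures from the economic analysis, and the decision rule below is a simplified stand-in.

```python
# Illustrative-only comparison; all monetary figures are placeholders and the
# rule is a simplified stand-in for the paper's full economic analysis.
retrofit_cost = 450_000       # e.g. predicted by the neural-network cost model
demolition_cost = 120_000     # cost of demolishing the existing structure
rebuild_cost = 900_000        # cost of constructing a replacement building

if retrofit_cost < demolition_cost + rebuild_cost:
    print("retrofitting is the cheaper risk-mitigation option")
else:
    print("demolition and reconstruction is the cheaper option")
```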

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 223
1477 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We mathematically present the basis of a new class of algorithms that treats a historical cause of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are given, we first introduce the detailed algorithm to derive the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Further, feedback is introduced to the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 141
1476 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This report summarizes a simulation-based approach to estimate the current interruption behavior of a fuse element utilized in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated, at which point the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To be able to precisely model this arc, one more step, related to the derivation of the transport coefficients of the plasma in ablated urethane, was necessary. The results indicate that an arc current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed.

Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses

Procedia PDF Downloads 52
1475 Research on the Aero-Heating Prediction Based on Hybrid Meshes and Hybrid Schemes

Authors: Qiming Zhang, Youda Ye, Qinxue Jiang

Abstract:

Accurate prediction of the external flowfield and the aero-heating at the wall of a hypersonic vehicle is crucial for aircraft design. Unstructured/hybrid meshes have clear advantages over structured meshes in terms of pre-processing, parallel computing and mesh adaptation, so it is imperative to develop high-resolution numerical methods for the calculation of the aerothermal environment on unstructured/hybrid meshes. The inviscid flux scheme is one of the most important factors affecting the accuracy of heat flux calculations on unstructured/hybrid meshes. Here, a new hybrid flux scheme is developed and an approach for interface type selection is proposed: 1) the exact Riemann solution is used to calculate the flux on the faces parallel to the wall; 2) the Steger-Warming (S-W) scheme is employed to improve the stability of the numerical scheme on the other interfaces. The computed heat flux fits the one observed experimentally and has little dependence on the grid, which shows great application prospects for unstructured/hybrid meshes.
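
A minimal sketch of the interface-type selection rule described above (not the authors' implementation): faces lying parallel to the wall are flagged for the exact Riemann flux, all other faces for Steger-Warming splitting; the wall normal, the angular tolerance and the example face normals are assumptions.

```python
# Hypothetical sketch of per-face flux-scheme selection on a hybrid mesh.
import numpy as np

def select_flux_scheme(face_normal, wall_normal, tol_deg=5.0):
    """Return the flux scheme to use on a face, based on its orientation.

    A face lying parallel to the wall has a normal (anti)parallel to the wall
    normal; such faces get the exact Riemann solution, the rest get
    Steger-Warming for robustness."""
    n_f = np.asarray(face_normal, dtype=float)
    n_w = np.asarray(wall_normal, dtype=float)
    c = abs(n_f @ n_w) / (np.linalg.norm(n_f) * np.linalg.norm(n_w))
    angle = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    return "exact_riemann" if angle < tol_deg else "steger_warming"

wall_normal = (0.0, 1.0, 0.0)   # assumed wall orientation
for normal in [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.3, 0.95, 0.0)]:
    print(normal, "->", select_flux_scheme(normal, wall_normal))
```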

Keywords: aero-heating prediction, computational fluid dynamics, hybrid meshes, hybrid schemes

Procedia PDF Downloads 219
1474 In Case of Possible Disaster Management with Geographic Information System in Konya

Authors: Savaş Durduran, Ceren Yağci

Abstract:

Natural disasters significantly affect people's lives throughout the world. Considering thousands of years of Earth's history, it can be seen that many natural disasters, particularly earthquakes, have occurred in our country. Acting cautiously before hazards occur is much easier and more cost-effective than trying to return to normal life after a disaster. The four phases of disaster management worldwide have been described as pre-disaster preparedness and mitigation and post-disaster response and rehabilitation. The pre-disaster and post-disaster phases each carry half the weight of disaster management. The better we prepare for disasters and the more importance we give to damage-reduction work, the less material and emotional harm we will suffer. To do this in a systematic way, we use Geographic Information Systems (GIS). GIS can be useful for executing emergency services on time and for developing the most appropriate decisions within the emergency control mechanism. By analyzing the city's seismic data with GIS, city managers can obtain the information and data required to produce sounder and more appropriate policy decisions. In this study, databases of spatial and non-spatial data were created using ArcGIS software and reports of the earthquakes that occurred in the city of Konya; with the help of these databases, GIS-aided analyses were performed, aiming at disaster management for a potential urban earthquake in the city of Konya.

Keywords: geographic information systems (GIS), disaster management, emergency control mechanism, Konya

Procedia PDF Downloads 456
1473 Separating Permanent and Induced Magnetic Signature: A Simple Approach

Authors: O. J. G. Somsen, G. P. M. Wagemakers

Abstract:

Magnetic signature detection provides sensitive detection of metal objects, especially in the natural environment. Our group is developing a tabletop setup for measuring the magnetic signatures of various small and model objects. A particular issue is the separation of permanent and induced magnetization. While the latter depends only on the composition and shape of the object, the former also depends on the magnetization history. With common deperming techniques, a significant permanent signature may still remain, which confuses measurements of the induced component. We investigate a basic technique for separating the two. Measurements were done by moving the object along an aluminum rail while the three field components were recorded by a detector attached near the center. This is done first with the rail parallel to the Earth's magnetic field and then with anti-parallel orientation. The reversal changes the sign of the induced, but not the permanent, magnetization, so that the two can be separated. Our preliminary results on a small iron block show excellent reproducibility. A considerable permanent magnetization was indeed present, resulting in a complex asymmetric signature. After separation, a much more symmetric induced signature was obtained that can be studied in detail and compared with theoretical calculations.
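
A minimal sketch of the separation step implied by this abstract: since reversing the rail relative to the Earth field flips the sign of the induced component but not the permanent one, the two follow from the half-sum and half-difference of the two recorded signatures. The arrays below are placeholder values, not measured data.

```python
# Illustrative-only separation of permanent and induced signatures.
import numpy as np

# Placeholder signatures: one field component sampled along the rail.
sig_par = np.array([0.5, 1.8, 2.9, 1.2, 0.1])      # rail parallel to Earth field
sig_anti = np.array([0.3, -0.6, -1.1, -0.4, 0.1])  # rail anti-parallel

permanent = 0.5 * (sig_par + sig_anti)  # unchanged by the reversal
induced = 0.5 * (sig_par - sig_anti)    # changes sign with the reversal
print("permanent:", permanent)
print("induced:  ", induced)
```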

Keywords: magnetic signature, data analysis, magnetization, deperming techniques

Procedia PDF Downloads 438
1472 Performance Improvement of The Nano-Composite Based Proton Exchange Membranes (PEMs)

Authors: Yusuf Yılmaz, Kevser Dincer, Derya Saygılı

Abstract:

In this study, the performance of PEMs was experimentally investigated. Coating on the cathode side of the PEM fuel cells was accomplished with the spray method using NaCaNiBO. A solution of 0.1 g NaCaNiBO in 10 mL methanol was prepared. This solution was loaded into a sprayer, and the cathode side of the PEM fuel cells was coated with NaCaNiBO using the spray method. After coating, the membrane was left to dry for 24 hours. The PEM fuel cells were mounted in the system in single, double, triple and fourfold arrangements in order to identify the best performance. The performance parameter considered was the power-to-current ratio. The best performance was found to occur at the 300th second, with a power-to-current ratio of 3.55 Watt/Ampere, on the fourfold parallel mounting after the coating; whereas the poorest performance took place at the 210th second, with a power-to-current ratio of 0.12 Watt/Ampere, on the twofold parallel connection after the coating.

Keywords: nano-composites, proton exchange membranes, performance improvement, fuel cell

Procedia PDF Downloads 358
1471 Design of S-Shape GPS Application Electrically Small Antenna

Authors: Riki H. Patel, Arpan Desai, Trushit Upadhyaya, Shobhit K. Patel

Abstract:

The microstrip antenna area has seen some inventive work in recent years and is now one of the most dynamic fields of antenna theory. A novel and simple printed wideband monopole antenna is presented. It is a low-profile antenna structure, printed on a single dielectric substrate and easily fed by a 50 ohm microstrip line, with two parallel S-shaped meandered lines of the same size. In this research, an S-shaped microstrip patch antenna is designed; measurements of prototypes of the proposed antenna show one available band with a 10 dB return-loss bandwidth suited to the GPS application (GPS L2, 1490 MHz), covering the 1400 to 1580 MHz frequency band around 1.5 GHz. The simulated results for the main parameters, such as return loss, impedance bandwidth, radiation patterns and gains, are also discussed herein. The modeling study shows that such antennas are simple to design and feed and can satisfy the GPS application. Two parallel slots are incorporated to disturb the surface current path, introducing a local inductive effect. This antenna is fed by a coaxial feed.

Keywords: bandwidth, electrically small antenna, microstrip, patch antenna, GPS

Procedia PDF Downloads 480
1470 Performing the Landscape: Temporary and Performative Practices in Landscape Production

Authors: Miguel Costa

Abstract:

Despite the "time" element being an intrinsic characteristic of work with the landscape, its execution and completion are also often dependent on external factors, i.e., the slow bureaucratic procedures required for the implementation of a project. In the urban areas of the city, these conditions are even more present: some landscape projects are articulated with the architectural/urban design, carrying long, expensive and inflexible processes ill-suited to the constant transformations of contemporary urban culture, where needs and expectations may change before the project is finished. However, despite the renewed interest in and growing concern for issues related to landscapes (particularly since the European Landscape Convention, whose scope and fields of action extend to all landscapes and not just selected ones), there is still a need for greater inclusion of citizens in landscape protection and construction processes, as well as for greater transparency and clarity about the consequences and results of their active participation. This article aims to reflect on the production processes of urban landscapes, on their completion time and on their relationship with citizens, by introducing temporary projects as a fieldwork methodology and by using the contribution of different professional practices and knowledge for their monitoring, execution, and implementation. These strategies adopt a more interdisciplinary, transdisciplinary and performative approach, drawing not only on the ephemeral experience of objects and actions but also on the processes and dynamic events that are organized from these objects and actions over the landscape. The goal is to discuss the results of these approaches along their different dimensions: critical; experimental and strategic; pedagogical; political; and cultural.

Keywords: landscape fieldwork, interdisciplinarity, public inclusion, public participation, temporary projects, transdisciplinarity

Procedia PDF Downloads 314
1469 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

Authors: Sunaitan Al Mutairi

Abstract:

A perfect start is a key factor for on-time project completion. The study examined the effects of delayed mobilization of resources during the initial phases of the project. This paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A relatively large percentage of the delays identified during project execution (contract award to end of defects liability period) were attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management skills related to workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management. Mainly, the contractor was on the front foot in the execution of projects, which had minimal or no delays during the initial and construction periods. Hence, a perfect start during the initial construction phase has a positive influence on project success. Our research paper studies each type of delay with real examples supported by statistical results and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to obtain a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluation/audit to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.

Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays

Procedia PDF Downloads 299
1468 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and Simulation (M&S) methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S appeared in order to improve the development of very complex simulation systems. Some of these techniques proved to be successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development tasks, reducing costs and favoring reuse. The DEVS formalism is one of these techniques, which proved to be successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective to solve current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally present current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view in these fields.

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 309
1467 Technical Assessment of Utilizing Electrical Variable Transmission Systems in Hybrid Electric Vehicles

Authors: Majid Vafaeipour, Mohamed El Baghdadi, Florian Verbelen, Peter Sergeant, Joeri Van Mierlo, Kurt Stockman, Omar Hegazy

Abstract:

The Electrical Variable Transmission (EVT), an electromechanical device, can be considered an alternative to the conventional transmission system utilized in Hybrid Electric Vehicles (HEVs). This study presents comparisons, in terms of fuel consumption, power split, and state of charge (SoC), of an HEV containing an EVT against a conventional parallel topology and a series topology. To this end, corresponding simulations of these topologies are performed in the presence of control strategies enabling battery charge sustaining and an efficient power split. The power flows through the components of the vehicle are obtained, and the fuel consumption results of the considered cases are compared. Investigation of the results indicates that utilizing an EVT can provide significant added value in HEV configurations. The outcome of the current research paves the way for the implementation of design optimization approaches for such systems in further research directions.

Keywords: Electrical Variable Transmission (EVT), Hybrid Electric Vehicle (HEV), parallel, series, modeling

Procedia PDF Downloads 224
1466 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS and that parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
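
A minimal sketch (using mpi4py rather than the DSEM code itself) of the Lustre-oriented startup read described above, where every rank pulls its own fixed-size slice of a shared binary pre-file with a collective MPI I/O call; the file name, record size and data type are assumptions.

```python
# Hypothetical sketch: each MPI rank reads its own slice of a shared pre-file
# with collective MPI I/O (the Lustre strategy described in the abstract).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

ints_per_rank = 1024                       # assumed fixed-size record per rank
buf = np.empty(ints_per_rank, dtype=np.int32)

fh = MPI.File.Open(comm, "partition.pre", MPI.MODE_RDONLY)
offset = rank * ints_per_rank * buf.itemsize
fh.Read_at_all(offset, buf)                # collective read: one call, all ranks
fh.Close()

# On a GPFS system the paper instead has a single rank per node read a
# node-local file and scatter it with non-blocking point-to-point sends.
```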

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 122
1465 Improvement to Pedestrian Walkway Facilities to Enhance Pedestrian Safety-Initiatives in India

Authors: Basavaraj Kabade, K. T. Nagaraja, Swathi Ramanathan, A. Veeraragavan, P. S. Reashma

Abstract:

The deteriorating quality of the pedestrian environment and the increasing risk of pedestrian crashes are major concerns for most cities in India. The recent shift of priority to motorized transport and the deteriorating condition of existing pedestrian facilities can be considered prime reasons for the increase in pedestrian-related crashes in India. Bengaluru City, the IT capital hub of the nation, is not much different in this respect. The increase in the number of pedestrian crashes in Bengaluru reflects this. To resolve this issue and to ensure safe, sustainable and pedestrian-friendly sidewalks, the Government of Karnataka, India has implemented a new pedestrian sidewalk programme named Tender S.U.R.E. (Specifications for Urban Road Execution). Tender SURE adopts unique urban street design guidelines in which pedestrians are given prime preference. The present study presents an assessment of the quality and performance of the pedestrian sidewalks and the walkability index of the newly built pedestrian-friendly sidewalks. Various physical and environmental factors affecting pedestrian safety are identified and studied in detail. Pedestrian mobility is quantified through the Pedestrian Level of Service (PLoS), and pedestrian walking comfort is measured by calculating the Walkability Index (WI). It is observed that the new initiatives taken with reference to improving pedestrian safety have succeeded in Bengaluru, attaining a Level of Service of 'A' and a good WI score.

Keywords: pedestrian safety, pedestrian level of service (PLoS), Right of Way (RoW), Tender S.U.R.E (Specifications for Urban Road Execution), walkability index (WI), walkway facilities

Procedia PDF Downloads 177
1464 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited to natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 355
1463 Reality of Right to Education in States of India from the Point of Stumbling to Settling the Child

Authors: Ekroop Singh Sethi, Arshnoor Kaur, M. H. Bharath

Abstract:

India is the fastest-growing economy and a land of tradition and culture, home to 19% of the world's children. Children are an essential part of any economy as its future GDP contributors, and it is therefore the duty of a country to take care of its future wealth providers. Each country has its own approach to child welfare. India, as a developing country, has its own child welfare schemes in place, but the question is, are they really as effective as they seem? Are the schemes sufficient? And what about implementation? With 41% of the population below the age of 18, questions relating to child education and welfare require focus. The Right to Education is a significant act of the Government of India that sets out the roadmap for free and compulsory elementary education for children in India, making India the 135th country to make education a right, with proper support from the government to overcome the shadow of economic conditions and status that prevents children from learning and growing. But is the Right to Education a child-centric movement? It faces the major problem of a well-planned, practical curriculum and facilitators, as only 40% of grade 5 students can read a grade 2 textbook. Is the policy capable of settling the child, or is it still trapped in the negative realities of the competitive environment of private versus government schools? From the steps of encouragement at the pupil's home to enlightening centers, the article focuses on the level of execution, the impact, and the difference made in contributing to and enabling the children of India for a better tomorrow, and on a solution to the multilayered problems of elementary education in India.

Keywords: growing economy, child welfare, right to education, elementary education, private vs government schools, pupil's home, enlightening centers, execution, impact

Procedia PDF Downloads 221
1462 Hydrodynamics of Dual Hybrid Impeller of Stirred Reactor Using Radiotracer

Authors: Noraishah Othman, Siti K. Kamarudin, Norinsan K. Othman, Mohd S. Takriff, Masli I. Rosli, Engku M. Fahmi, Mior A. Khusaini

Abstract:

The present work describes the hydrodynamic mixing characteristics of two dual hybrid impellers, each consisting of a radial and an axial impeller, using a radiotracer technique. In the Type A mixer, a Rushton turbine is mounted above a Pitched Blade Turbine (PBT) on a common shaft; in the Type B mixer, the Rushton turbine is mounted below the PBT. The objectives of this paper are to investigate the residence time distribution (RTD) of the two hybrid mixers and to represent the respective mixers by RTD models. Each type of mixer underwent five radiotracer experiments using Tc-99m as the tracer source, with NaI(Tl) scintillation detectors used for tracer detection. The results showed that both the mixers-in-parallel model and the mixers-in-series-with-exchange model can represent the flow in mixer A, whereas only the mixers-in-parallel model can represent the Type B mixer better than the other models. In conclusion, the Type A impeller, with the Rushton impeller above the PBT, reduced the presence of dead zones in the mixer significantly more than Type B.
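
As a rough illustration of the mixers-in-parallel representation mentioned above (a standard radiotracer RTD model form, not the authors' fitted parameters), the sketch below evaluates the residence time distribution of two perfectly mixed regions in parallel; the flow fraction and mean residence times are placeholders.

```python
# E(t) for two perfectly mixed regions in parallel: a fraction 'a' of the flow
# sees mean residence time tau1, the rest sees tau2. Parameter values are
# illustrative placeholders, not values fitted to the Tc-99m data.
import numpy as np

a, tau1, tau2 = 0.7, 12.0, 45.0      # flow fraction and mean residence times (s)
t = np.linspace(0.0, 300.0, 600)     # time after tracer injection (s)

E = a / tau1 * np.exp(-t / tau1) + (1.0 - a) / tau2 * np.exp(-t / tau2)
area = np.sum(E) * (t[1] - t[0])     # should be close to 1 for a valid RTD
print("area under E(t):", round(float(area), 3))
```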

Keywords: hybrid impeller, residence time distribution (RTD), radiotracer experiments, RTD model

Procedia PDF Downloads 337
1461 Task Scheduling and Resource Allocation in Cloud Based on AHP Method

Authors: Zahra Ahmadi, Fazlollah Adibnia

Abstract:

The scheduling of tasks and the optimal allocation of resources in the cloud are driven by the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage requirements. In order to increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation is a combination of the AHP algorithm and a first-come, first-served method. Resource prioritization is done with the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, this system considers the Linear Max-Min and Linear Max normalization methods, which are the best choices for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, return time and execution time of input tasks in the proposed method compared to similar (basic) methods.
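
A minimal sketch of the resource-ranking step described above: candidate virtual machines are scored on main memory size, processor speed and bandwidth after Linear Max-Min (or Linear Max) normalization of each criterion. The VM values and criterion weights are illustrative placeholders, not the paper's data.

```python
# Hypothetical sketch of criterion normalization and weighted VM ranking.
import numpy as np

# rows = candidate VMs, columns = (memory GB, CPU MIPS, bandwidth Mbps)
vms = np.array([
    [4.0, 2000.0, 100.0],
    [8.0, 1500.0, 500.0],
    [16.0, 1000.0, 250.0],
])
weights = np.array([0.5, 0.3, 0.2])    # assumed AHP-derived criterion weights

def linear_max_min(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def linear_max(x):
    return x / x.max(axis=0)

scores = linear_max_min(vms) @ weights  # swap in linear_max(vms) to compare
print("VM ranking (best first):", np.argsort(-scores))
```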

Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow

Procedia PDF Downloads 131
1460 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres. Each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve the static Flexible Job Shop problem. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The simulation results show that the proposed framework has reasonable performance and time efficiency.
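
A minimal sketch of the rule-selection step described above: at each decision point a work centre's state vector is fed to a small Q-network and the dispatching rule with the highest Q-value is applied (epsilon-greedy during training). The network size, the state features and the rule list are assumptions, and the weights here are untrained.

```python
# Hypothetical sketch of Q-network based dispatching-rule selection.
import numpy as np

RULES = ["SPT", "EDD", "FIFO", "MWKR"]          # candidate dispatching rules
rng = np.random.default_rng(0)

# Two-layer Q-network with random (untrained) weights, for illustration only.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, len(RULES))), np.zeros(len(RULES))

def q_values(state):
    h = np.maximum(state @ W1 + b1, 0.0)        # ReLU hidden layer
    return h @ W2 + b2

def select_rule(state, epsilon=0.1):
    if rng.random() < epsilon:                  # exploration during training
        return int(rng.integers(len(RULES)))
    return int(np.argmax(q_values(state)))     # greedy choice at deployment

state = rng.normal(size=8)   # e.g. queue length, slack, utilisation features
print("chosen rule:", RULES[select_rule(state)])
```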

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 87
1459 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucinations": the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground the LLM's responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to provide market insights and data visualizations with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
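
A minimal sketch of the planning-and-execution pattern described above, not the production DataSense pipeline: a planner splits the question into sub-tasks, each sub-task is turned into code and executed, and the final answer is generated from the executed outputs. The `call_llm` function, the prompts, the sub-task format and the `result` convention are all assumptions to be replaced by a real LLM client and sandbox.

```python
# Illustrative-only skeleton of a planning-and-execution + code-generation agent.
def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion API; plug in a real client here.
    raise NotImplementedError("connect an LLM client to run this sketch")

def analyse(question: str, table_path: str) -> str:
    # 1. Planner: break the analytical question into ordered sub-tasks.
    plan = call_llm(f"Split this analysis request into numbered sub-tasks:\n{question}")

    # 2. Code generation + execution: turn each sub-task into runnable pandas code.
    scratch: dict = {"TABLE_PATH": table_path}
    outputs = []
    for step in plan.splitlines():
        code = call_llm(f"Write Python (pandas) for this step; data is at TABLE_PATH:\n{step}")
        exec(code, scratch)                     # run in a restricted namespace
        outputs.append(scratch.get("result"))   # convention: each step sets `result`

    # 3. Final response grounded in the executed outputs, not free-form generation.
    return call_llm(f"Question: {question}\nComputed outputs: {outputs}\nWrite the answer.")
```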

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 59
1458 The Dynamics of Unsteady Squeezing Flow between Parallel Plates (Two-Dimensional)

Authors: Jiya Mohammed, Ibrahim Ismail Giwa

Abstract:

Unsteady squeezing flow of a viscous fluid between parallel plates is considered. The two plates approach each other symmetrically, causing the squeezing flow. A two-dimensional rectangular Cartesian coordinate system is considered. The Navier-Stokes equations were reduced, using a similarity transformation, to a single fourth-order non-linear ordinary differential equation. The energy equation was transformed into a second-order coupled differential equation. We obtained solutions to the resulting ordinary differential equations via the Homotopy Perturbation Method (HPM). HPM deforms a differential problem into a set of problems that are easier to solve, and it produces an analytic approximate expression in the form of an infinite power series; only six and five terms are used for the velocity and temperature, respectively. The results reveal that the proposed method is very effective and simple. Comparisons between the present and existing solutions are provided, and it is shown that the proposed method is in good agreement with the Variation of Parameters Method (VPM). The effects of the appropriate dimensionless parameters on the velocity profiles and temperature field are demonstrated with the aid of comprehensive graphs and tables.

Keywords: coupled differential equation, Homotopy Perturbation Method, plates, squeezing flow

Procedia PDF Downloads 457
1457 Towards a Business Process Model Deriving from an Intentional Perspective

Authors: Omnia Saidani Neffati, Rim Samia Kaabi, Naoufel Kraiem

Abstract:

In this paper, we propose an approach aiming at (i) representing services at two levels, the intentional level and the organizational level, and (ii) establishing mechanisms that allow a transition from the first level to the second in order to execute intentional services. An example is used to validate our approach.

Keywords: intentional service, business process, BPMN, MDE, intentional service execution

Procedia PDF Downloads 380
1456 Petra: Simplified, Scalable Verification Using an Object-Oriented, Compositional Process Calculus

Authors: Aran Hakki, Corina Cirstea, Julian Rathke

Abstract:

Formal methods are yet to be utilized in mainstream software development due to issues of scaling and implementation costs. This work is about developing a scalable, simplified, pragmatic, formal software development method with strong correctness properties and guarantees that are easy to prove. The method aims to be easy to learn, use and apply without extensive training and experience in formal methods. Petra is proposed as an object-oriented process calculus with composable data types and sequential/parallel processes. Petra has a simple denotational semantics, which includes a definition of Correct by Construction. The aim is for Petra to be a standard which can be implemented to execute on various mainstream programming platforms such as Java. Work towards an implementation of Petra as a Java EDSL (Embedded Domain Specific Language) is also discussed.

Keywords: compositionality, formal method, software verification, Java, denotational semantics, rewriting systems, rewriting semantics, parallel processing, object-oriented programming, OOP, programming language, correct by construction

Procedia PDF Downloads 124
1455 Fast Prediction Unit Partition Decision and Accelerating the Algorithm Using CUDA for Intra and Inter Prediction of HEVC

Authors: Qiang Zhang, Chun Yuan

Abstract:

Since the PU (Prediction Unit) decision process is the most time-consuming part of the emerging HEVC (High Efficiency Video Coding) standard in intra- and inter-frame coding, this paper proposes a fast PU decision algorithm and speeds it up using CUDA (Compute Unified Device Architecture). In intra-frame coding, the fast PU decision algorithm uses texture features to skip intra-frame prediction or to terminate intra-frame prediction for smaller PU sizes. In inter-frame coding of HEVC, the fast PU decision algorithm makes use of the similarity of its own two Nx2N-size PUs' motion vectors and the hierarchical structure of the CU (Coding Unit) partition to skip some PU partition modes, so as to reduce the number of motion estimations. The accelerated algorithm using CUDA is based on the fast PU decision algorithm and uses the GPU so that the motion search and the gradient computation can be computed in parallel. The proposed algorithm achieves up to 57% time saving compared to HM 10.0 with little rate-distortion loss (0.043 dB drop and 1.82% bitrate increase on average).

Keywords: HEVC, PU decision, inter prediction, intra prediction, CUDA, parallel

Procedia PDF Downloads 384
1454 Kinematic of Thrusts and Tectonic Vergence in the Paleogene Orogen of Eastern Iran, Sechangi Area

Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat

Abstract:

The eastern Iranian ranges form a Z-shaped sigmoidal outcrop appearing with an N-S-trending general strike on satellite images; they have already been known as the Sistan suture zone and have recently been identified as the product of an orogenic event introduced under either the Paleogene or the Sistan orogen name. The flysch sedimentary basin of eastern Iran was filled by a huge volume of fine-grained Eocene turbiditic sediments, smaller amounts of pelagic deposits, and Cretaceous ophiolitic slices, which are entirely remnants of older accretionary prisms appearing in a fold-thrust belt developed on a subduction zone under the Lut/Afghan block, portions of the Cimmerian superterrane. In these ranges, there are Triassic sedimentary and carbonate sequences (equivalent to the Nayband and Shotori Formations) along with scattered outcrops of Permian limestones (equivalent to the Jamal limestone) and greenschist-facies metamorphic rocks, probably belonging to the basement of the Lut block, which have tectonic contacts with younger rocks. Moreover, the younger Eocene detrital-volcanic rocks were also thrust onto the Cretaceous or younger turbiditic deposits. The first-generation folds (parallel folds) and thrusts with slaty cleavage appeared parallel to the NE edge of the Lut block. Structural analysis shows that most thrusts verge toward the southeast, so that the Permo-Triassic units of Lut have been thrust onto the younger rocks, including older (probably Jurassic) granites. Additional structural studies show that the regional transport direction in this deformation event is from northwest to southeast, i.e., from the outside to the inside of the orogen in the Sechangi area. Younger thrusts of the second deformation event were either formed directly as a result of that event, or they are older thrusts that were reactivated and folded, so that two or more sets of slickenlines can often be recognized on the thrust planes. The recent thrusts have been redistributed in directions nearly perpendicular to the edge of the Lut block and parallel to the axial surfaces of the northwest-trending second-generation large-scale folds (radial folds). Some of these younger thrusts follow the out-of-the-syncline thrust system. Both the axial planes of these folds and the associated penetrative shear cleavage, extending towards the northwest, appear with both northeast and southwest dips parallel to the younger thrusts. Large-scale buckling under a layer-parallel stress field created this deformation event. Such consecutive, mutually perpendicular deformation events basically cannot be explained by the simple linear orogen models presented for eastern Iran so far and are more consistent with the oroclinal buckling model.

Keywords: thrust, tectonic vergence, oroclinal buckling, Sechangi, eastern Iranian ranges

Procedia PDF Downloads 61
1453 Effect of Channel Variation of Two-Dimensional Water Tunnel to Study Fluid Dynamics Phenomenon

Authors: Rizka Yunita, Mas Aji Rizki Wijayanto

Abstract:

Computational fluid dynamics (CFD) is one way to explain how fluids behave dynamically. In this work, we investigate the effect of channel width on two-dimensional flow visualization. Using a horizontal water tunnel and a flowing soap film, we obtained a visualization of a continuous film from which a graphical overview of the flow in the region of interest can be observed. The horizontal water tunnel we used is divided into three parts: an expansion area, a parallel area used as the test section, and a contraction area. The channel width is the width of the parallel area, originally 7.2 cm, and the channel-width variation we observed is in multiples of about 1 cm. To compute the velocity, vortex shedding, and other physical parameters of the fluid, we used a circular cylinder as an obstacle to create a von Karman vortex street in the fluid and analyzed the phenomenon using the Particle Image Velocimetry (PIV) method, comparing the Reynolds number and Strouhal number obtained from the visualization. The wider the channel, the more turbulent the film, and separation zones of discontinuous flow appear.
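
A minimal sketch of the non-dimensional comparison mentioned above: Reynolds and Strouhal numbers computed from the cylinder diameter, the film velocity (e.g. from PIV) and the measured shedding frequency. The numerical values are illustrative placeholders, not measurements from this experiment.

```python
# Illustrative-only Reynolds and Strouhal number calculation.
D = 0.005        # cylinder diameter (m), assumed
U = 0.8          # mean film velocity from PIV (m/s), assumed
nu = 1.0e-6      # kinematic viscosity of the soap solution (m^2/s), assumed
f_shed = 30.0    # vortex-shedding frequency behind the cylinder (Hz), assumed

Re = U * D / nu        # Reynolds number
St = f_shed * D / U    # Strouhal number
print(f"Re = {Re:.0f}, St = {St:.3f}")
```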

Keywords: flow visualization, width of channel, vortex, Reynolds number, Strouhal number

Procedia PDF Downloads 363
1452 Nonlinear Defects and Discombinations in Anisotropic Solids

Authors: Ashkan Golgoon, Arash Yavari

Abstract:

In this paper, we present some analytical solutions for the stress fields of nonlinear anisotropic solids with distributions of line and point defects. In particular, we determine the induced stress fields of a parallel, cylindrically-symmetric distribution of screw dislocations in infinite orthotropic and monoclinic media, as well as of a cylindrically-symmetric distribution of parallel wedge disclinations in an infinite orthotropic medium. For a given distribution of edge dislocations, the material manifold is constructed using Cartan's moving frames, and the stress field is obtained assuming that the medium is orthotropic. Also, we consider a spherically-symmetric distribution of point defects in a transversely isotropic spherical ball. We show that for an arbitrary incompressible transversely isotropic ball with the radial material preferred direction, a uniform point defect distribution results in a uniform hydrostatic stress field inside the spherical region in which the distribution is supported. Finally, we find the stresses induced by a discombination in an orthotropic medium.

Keywords: defects, disclinations, dislocations, monoclinic solids, nonlinear elasticity, orthotropic solids, transversely isotropic solids

Procedia PDF Downloads 240