Search results for: prediction interval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3046

2146 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). In most systems, the data, in the form of clear text and images, are stored or processed in a relational format. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using a graph-based open-source EHR dataset, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN). Self-supervised learning with autoencoders is adopted to generate node embeddings, which are then used to perform node classification. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
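
As a toy illustration of the node-classification idea (not the paper's pyTigerGraph/PyTorch-Geometric pipeline), a single GCN-style message-passing layer can be sketched in plain NumPy on a synthetic graph:

```python
import numpy as np

# Toy illustration of GCN-style node classification (the paper uses
# PyTorch Geometric on Synthea data; the graph, features, and weights
# here are synthetic placeholders).
np.random.seed(0)

# 4-node graph: adjacency with self-loops already included
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Symmetric normalization: D^-1/2 A D^-1/2
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

X = np.random.randn(4, 3)          # node feature matrix (4 nodes, 3 features)
W = np.random.randn(3, 2)          # weight matrix (2 condition classes)

H = np.maximum(A_hat @ X @ W, 0)   # one message-passing layer + ReLU
pred = H.argmax(axis=1)            # node-level class prediction
print(pred.shape)                  # (4,)
```

In the actual pipeline, `W` would be learned and the embeddings would come from a self-supervised autoencoder rather than random initialization.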

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 87
2145 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China

Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini

Abstract:

Objective: It is increasingly recognized that patients’ rational and meaningful engagement in healthcare could make important contributions to their care and safety management. However, recent evidence indicates that patients’ actual roles in healthcare do not match their desired roles, and many patients report a less active role than desired, suggesting that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze the factors influencing patient engagement and to explore the influence mechanism, which is expected to contribute to the development of strategies for patient engagement in healthcare. Methods: A research framework was developed on the basis of a literature review and theoretical study. Following this framework, a cross-sectional survey was conducted using a questionnaire on the behavior and willingness of patient engagement in healthcare, the Chinese version of the All Aspects of Health Literacy Scale, the Facilitation of Patient Involvement Scale, the Wake Forest Physician Trust Scale, and other scales for related influencing factors. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The mean score for patient engagement behavior was 4.146 ± 0.496, and the mean score for willingness was 4.387 ± 0.459. The level of patient engagement behavior was inferior to the willingness to be involved in healthcare (t = 14.928, P < 0.01). An influencing-mechanism model of patient engagement in healthcare was constructed by path analysis. The path analysis revealed that patient attitude toward engagement, patients’ perception of facilitation of engagement, and health literacy were direct predictors of the willingness of engagement, with standardized path coefficients of 0.341, 0.199, and 0.291, respectively.
Patients’ trust in physician and the willingness of engagement were direct predictors of patient engagement, with standardized path coefficients of 0.211 and 0.641, respectively. Patient attitude toward engagement, patients’ perception of facilitation, and health literacy were indirect predictors of patient engagement, with standardized path coefficients of 0.219, 0.128, and 0.187, respectively. Conclusions: Patients’ engagement behavior did not match their willingness to be involved in healthcare. An influencing-mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients’ perception of facilitation of engagement, and health literacy exerted an indirect positive influence on patient engagement through the willingness of engagement, and these three factors were the direct influences on the willingness of engagement itself. Patients’ trust in physician and the willingness of engagement had a direct positive influence on patient engagement. The results provide valuable evidence for guiding the development of strategies to promote patients’ rational and meaningful engagement in healthcare.
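
The reported indirect effects are simply products of path coefficients: each predictor's direct effect on willingness multiplied by the willingness-to-engagement coefficient (0.641). A quick check reproduces the reported values:

```python
# Indirect effects in a path model: predictor -> willingness (direct)
# times willingness -> engagement (0.641), using the reported coefficients.
to_willingness = {"attitude": 0.341, "facilitation": 0.199, "literacy": 0.291}
willingness_to_engagement = 0.641

indirect = {k: round(v * willingness_to_engagement, 3)
            for k, v in to_willingness.items()}
print(indirect)  # {'attitude': 0.219, 'facilitation': 0.128, 'literacy': 0.187}
```

The products match the reported indirect coefficients of 0.219, 0.128, and 0.187 exactly, confirming the internal consistency of the path model.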

Keywords: healthcare, patient engagement, influencing factor, the mechanism

Procedia PDF Downloads 157
2144 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials

Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié

Abstract:

Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical emissions from construction materials, as well as enhanced mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, as they are seen as an effective lever to reduce the environmental impact of the building sector. However, bio-based materials are also hygroscopic: when in contact with the humid air of the surrounding environment, their porous structures readily capture water molecules, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay for many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth, in experimental settings as well as in occupied buildings. The difficulty of accounting for the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth. Several studies published in the literature, from the agri-food and automotive sectors, were selected and analysed, as the methodologies they deploy appeared promising.
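
The contrast between the deterministic and reliability approaches can be illustrated with a hypothetical Monte Carlo sketch (the 80% relative-humidity germination criterion is a common rule of thumb; the nominal value and spread below are invented for illustration):

```python
import random

# Hypothetical illustration: deterministic vs reliability-style assessment
# of a mould-germination criterion. All numerical values are invented.
random.seed(1)

RH_CRITICAL = 80.0          # germination threshold (% relative humidity)
rh_nominal = 78.0           # deterministic design value

# Deterministic approach: a single yes/no answer
deterministic_risk = rh_nominal >= RH_CRITICAL   # False -> "no risk"

# Reliability approach: propagate uncertainty on RH (sigma = 3 % RH assumed)
samples = [random.gauss(rh_nominal, 3.0) for _ in range(100_000)]
p_exceed = sum(s >= RH_CRITICAL for s in samples) / len(samples)

print(deterministic_risk)        # False: deterministic check sees no risk
print(round(p_exceed, 2))        # yet the exceedance probability is sizeable
```

The deterministic check declares the design safe, while the probabilistic view reveals a roughly one-in-four chance of exceeding the germination threshold, which is the kind of insight motivating the reliability approach.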

Keywords: bio-based materials, mould growth, numerical prediction, reliability approach

Procedia PDF Downloads 48
2143 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners

Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas

Abstract:

Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention involving brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals.
One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC’s effectiveness at treating exercise-induced muscle damage following running compared to other interventions, WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.

Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy

Procedia PDF Downloads 241
2142 Perceptual Organization within Temporal Displacement

Authors: Michele Sinico

Abstract:

The psychological present has an actual temporal extension. When a sequence of instantaneous stimuli falls within this short interval of time, observers perceive a compresence of events in succession, and the temporal order depends on the qualitative relationships between the perceptual properties of the events. Two experiments were carried out to study the influence of perceptual grouping, with and without temporal displacement, on the duration of auditory sequences. The psychophysical method of adjustment was adopted. The first experiment investigated the effect of the temporal displacement of a white noise on sequence duration. The second experiment investigated the effect of temporal displacement, along the pitch dimension, on the temporal shortening of the sequence. The results suggest that the temporal order of sounds, in the case of temporal displacement, is organized along the pitch dimension.

Keywords: time perception, perceptual present, temporal displacement, Gestalt laws of perceptual organization

Procedia PDF Downloads 252
2141 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and determines statistical weights via a least-squares minimization using simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for estimating weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall than the conventional multi-model approach and the member models.
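
The training-phase regression at the core of the superensemble can be sketched as follows; the member forecasts and observations below are synthetic stand-ins for GCM output and rain-gauge data:

```python
import numpy as np

# Sketch of the superensemble training phase: regress observed rainfall on
# member-model forecasts over a training period, then apply the weights.
# All data here are synthetic stand-ins for GCM output and observations.
rng = np.random.default_rng(42)

n_days, n_models = 200, 5
truth = rng.gamma(2.0, 5.0, n_days)                   # "observed" rainfall
# Each member = truth with its own multiplicative bias plus noise
forecasts = np.column_stack([
    truth * rng.uniform(0.7, 1.3) + rng.normal(0, 2.0, n_days)
    for _ in range(n_models)
])

# Training phase: least-squares weights (intercept absorbs mean bias)
A = np.column_stack([np.ones(n_days), forecasts])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)

# Forecast phase: weighted combination vs the simple ensemble mean
superensemble = A @ coef
simple_mean = forecasts.mean(axis=1)

rmse = lambda x: float(np.sqrt(np.mean((x - truth) ** 2)))
print(rmse(superensemble) <= rmse(simple_mean))  # True
```

Because the simple ensemble mean lies in the span of the regression columns, the in-sample least-squares fit can never have a higher RMSE than it, which is the statistical basis of the superensemble's skill advantage in the training period.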

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
2140 Impact of the Time Interval in the Numerical Solution of Incompressible Flows

Authors: M. Salmanzadeh

Abstract:

In this paper, we deal with incompressible Couette flow, which admits an exact analytical solution of the Navier-Stokes equations. Couette flow is perhaps the simplest of all viscous flows, while retaining much of the physical character of a more complicated boundary-layer flow. The numerical technique we employ for the solution of the Couette flow is the Crank-Nicolson implicit method. Parabolic partial differential equations lend themselves to a marching solution; in addition, the use of an implicit technique allows a much larger marching step size than an explicit solution would. Hence, the present paper offers the opportunity to explore aspects of CFD different from those discussed in other papers.
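
A minimal illustration of the Crank-Nicolson march described above, applied to impulsively started Couette flow (grid size, viscosity, and time step are illustrative choices; the scheme converges to the known linear steady-state profile):

```python
import numpy as np

# Crank-Nicolson march for impulsively started Couette flow:
# du/dt = nu * d2u/dy2, u(0)=0 (fixed wall), u(H)=1 (moving wall).
N, nu, dt, dy = 21, 1.0, 0.01, 1.0 / 20
r = nu * dt / (2 * dy**2)

u = np.zeros(N)
u[-1] = 1.0                      # moving upper wall

# Implicit (LHS) and explicit (RHS) tridiagonal operators
lap = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), -1))
LHS = np.eye(N) - r * lap
RHS = np.eye(N) + r * lap
# Dirichlet boundaries: keep wall values fixed
for M in (LHS, RHS):
    M[0, :], M[-1, :] = 0, 0
    M[0, 0] = M[-1, -1] = 1

for _ in range(2000):            # march to steady state
    u = np.linalg.solve(LHS, RHS @ u)

# Steady Couette solution is the linear profile u = y/H
y = np.linspace(0, 1, N)
print(np.max(np.abs(u - y)) < 1e-6)  # True
```

Note that the implicit solve remains stable even though r = 2 here, far above the explicit-scheme stability limit of 1/2, which is precisely the advantage of the Crank-Nicolson method for marching solutions.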

Keywords: incompressible couette flow, numerical method, partial differential equation, Crank-Nicolson implicit

Procedia PDF Downloads 538
2139 Exploring the Relationship Between Helicobacter Pylori Infection and the Incidence of Bronchogenic Carcinoma

Authors: Jose R. Garcia, Lexi Frankel, Amalia Ardeljan, Sergio Medina, Ali Yasback, Omar Rashid

Abstract:

Background: Helicobacter pylori (H. pylori) is a gram-negative, spiral-shaped bacterium that affects nearly half of the population worldwide, with humans serving as the principal reservoir. Infection rates usually follow an inverse relationship with hygiene practices and are higher in developing than in developed countries. Incidence varies significantly by geographic area, race, ethnicity, age, and socioeconomic status. H. pylori is primarily associated with conditions of the gastrointestinal tract such as atrophic gastritis and duodenal peptic ulcers. Infection is also associated with an increased risk of carcinogenesis, as there is evidence that H. pylori infection may lead to gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. It has been suggested that H. pylori infection may be considered a systemic condition, leading to various novel associations with several different neoplasms such as colorectal, pancreatic, and lung cancer, although further research is needed. Emerging evidence suggests that H. pylori infection may offer protective effects against Mycobacterium tuberculosis as a result of non-specific induction of interferon-γ (IFN-γ). Similar mechanisms of enhanced immunity may affect the development of bronchogenic carcinoma through the antiproliferative, pro-apoptotic, and cytostatic functions of IFN-γ. The purpose of this study was to evaluate the correlation between H. pylori infection and the incidence of bronchogenic carcinoma. Methods: The data were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database; patients infected with H. pylori were compared with uninfected patients using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Standard statistical methods were used.
Results: Between January 2010 and December 2019, the query returned 163,224 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of bronchogenic carcinoma was 1.853% (3,024 patients) in the H. pylori group compared to 4.785% (7,810 patients) in the control group. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.367 (95% CI 0.353-0.383). The two groups were then matched by treatment and incidence of cancer, yielding a total of 101,739 patients. The incidence of bronchogenic carcinoma was 1.929% (1,962 patients) in the H. pylori group with treatment compared to 4.618% (4,698 patients) in the control group with treatment. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.403 (95% CI 0.383-0.425).
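
As an illustration, the odds ratio for the first comparison can be recomputed from the reported counts (a 2×2 table with a Wald-type log-odds confidence interval); this yields about 0.376, close to the reported 0.367, with the small difference presumably reflecting the database's exact matching procedure:

```python
import math

# Recomputing the odds ratio from the reported counts (first comparison):
# 3,024/163,224 cancer cases in the H. pylori group vs 7,810/163,224
# in the matched control group.
a, n1 = 3024, 163224     # cases, total in H. pylori group
c, n2 = 7810, 163224     # cases, total in control group
b, d = n1 - a, n2 - c    # non-cases in each group

odds_ratio = (a / b) / (c / d)

# Wald 95% CI on the log-odds scale
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)

print(round(odds_ratio, 3), round(lo, 3), round(hi, 3))
```

An odds ratio well below 1, with a confidence interval excluding 1, is what supports the study's suggestion of a protective association.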

Keywords: bronchogenic carcinoma, helicobacter pylori, lung cancer, pathogen-associated molecular patterns

Procedia PDF Downloads 186
2138 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows motivates the need to establish fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, the fractal model, a force balance approach, and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The presented wall heat flux partitioning closures were modified to consider the influence of bubble sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, with the standard k-ω SST model selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed, spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking while sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow prediction.

Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model

Procedia PDF Downloads 320
2137 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data are not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost-control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total cost of the SCS. Predicting BT installation cost in SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in SCS needed for deeper cost analysis, and to categorize the main groups of cost components in more detail so they can be utilized in the prediction process. The second objective is to determine a suitable supervised learning technique to predict the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running cost of BT enters into the total cost of the SCS. 2. Work Performed: Supervised learning, applied successfully in various fields, is a learning model that predicts an outcome measurement from previously unseen input data after being trained on labeled examples. The following steps are conducted to pursue the objectives above. The first step is a literature review to identify the different cost components of BT installation in SCS.
Based on the literature review, supervised learning methods suitable for BT installation cost prediction in SCS are chosen. According to the literature, supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will identify the model with the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in SCS with the help of supervised learning algorithms. We will first select a case study in the field of BT-enabled SCS, then use supervised learning algorithms to predict BT installation cost in SCS, and finally identify the best predictive performance for developing and running BT in SCS. The paper will be presented at the conference.
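
A dependency-free sketch of the proposed prediction step, with plain least-squares regression standing in for SVR/ANN, and with invented cost-component features and figures:

```python
import numpy as np

# Minimal stand-in for the proposed pipeline: predict BT installation cost
# from hypothetical cost-component features. Plain least squares replaces
# SVR/ANN purely to keep the sketch dependency-free; every feature name
# and cost figure below is invented.
rng = np.random.default_rng(7)

n = 50
nodes = rng.integers(5, 100, n)            # network nodes to deploy
tx_per_day = rng.integers(100, 10_000, n)  # daily transaction volume
dev_hours = rng.integers(200, 2_000, n)    # development effort

X = np.column_stack([np.ones(n), nodes, tx_per_day, dev_hours])
# Synthetic "true" cost: fixed fee + per-node + per-tx + per-hour terms
cost = (10_000 + 800 * nodes + 2 * tx_per_day + 150 * dev_hours
        + rng.normal(0, 5_000, n))

w, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Predict the installation cost of a new deployment
pred_cost = float(np.array([1, 40, 5_000, 1_000]) @ w)
print(pred_cost > 0)  # True
```

In the actual study, the features would come from the identified cost components of the chosen case study, and SVR, BP, and ANN models would be compared on held-out data rather than fitted in-sample.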

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 122
2136 Optimal Number of Reconfigurable Robots in a Transport System

Authors: Mari Chaikovskaia, Jean-Philippe Gayon, Alain Quilliot

Abstract:

We consider a fleet of elementary robots that can be connected in different ways to transport loads of different types. For instance, a single robot can transport a small load, and the association of two robots can transport either a large load or two small loads. We seek to determine the optimal number of robots to transport a set of loads in a given time interval, with or without reconfiguration. We show that the problem with reconfiguration is strongly NP-hard by a reduction from the bin-packing problem. Then, we study a special case with unit capacities and derive simple formulas for the minimum number of robots for up to 3 types of loads. For this special case, we compare the minimum number of robots with and without reconfiguration and show that the gain is limited in absolute value but may be significant for small fleets.

Keywords: fleet sizing, reconfigurability, robots, transportation

Procedia PDF Downloads 87
2135 Comparison of Back-Projection with Non-Uniform Fast Fourier Transform for Real-Time Photoacoustic Tomography

Authors: Moung Young Lee, Chul Gyu Song

Abstract:

Photoacoustic imaging is an imaging technology that combines optical imaging and ultrasound, providing the high contrast of optical imaging and the high resolution of ultrasound imaging. We developed a real-time photoacoustic tomography (PAT) system using a linear ultrasound transducer and a digital acquisition (DAQ) board. There are two types of algorithms for reconstructing the photoacoustic signal: the back-projection algorithm and the FFT algorithm; in particular, we used the non-uniform FFT algorithm. To evaluate the performance of our system and algorithms, we imaged two wires separated by 2.89 mm and 0.87 mm, respectively, and compared the images reconstructed by the two algorithms. Finally, we imaged two crossed hairs and again compared the algorithms.
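
The back-projection (delay-and-sum) reconstruction can be sketched as follows for a synthetic point absorber and a linear array (geometry and sampling parameters are illustrative, not those of the reported system):

```python
import numpy as np

# Sketch of delay-and-sum back-projection for PAT: for each image pixel,
# sum the signal each transducer element recorded at the acoustic
# time-of-flight from that pixel. Geometry and sampling are illustrative.
c = 1500.0                                # speed of sound, m/s
fs = 40e6                                 # sampling rate, Hz
elems = np.linspace(-5e-3, 5e-3, 64)      # linear-array element x-positions

# Synthetic point absorber at (0 mm, 10 mm): build its channel data
src = np.array([0.0, 10e-3])
sinogram = np.zeros((elems.size, 1200))
for i, ex in enumerate(elems):
    t = np.hypot(src[0] - ex, src[1]) / c
    sinogram[i, int(round(t * fs))] = 1.0

# Back-project onto a small image grid around the source
xs = np.linspace(-2e-3, 2e-3, 21)
zs = np.linspace(8e-3, 12e-3, 21)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        idx = np.round(np.hypot(x - elems, z) / c * fs).astype(int)
        image[iz, ix] = sinogram[np.arange(elems.size), idx].sum()

# The reconstruction peak should land at the true source position
iz, ix = np.unravel_index(image.argmax(), image.shape)
print(xs[ix], zs[iz])  # ~0.0, ~0.01
```

The non-uniform FFT approach arrives at an equivalent image by resampling the data spectrum onto a uniform k-space grid, trading this per-pixel summation for FFT-rate reconstruction.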

Keywords: back-projection, image comparison, non-uniform FFT, photoacoustic tomography

Procedia PDF Downloads 434
2134 Precipitation Intensity-Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley

Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara

Abstract:

The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and corresponding durations derived from daily rainfall data of the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and some ancillary landslide inventory data for 2013 and 2014 were used to determine the intensity-duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was adopted for the prediction of landslides in the study region and validated with an accuracy of 70% on landslides during August and September 2014. From the obtained results and validation, it can be inferred that this equation can be used as part of an early warning system for the initiation of landslides in the study area. Results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method to obtain first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
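
A minimal sketch of how the derived threshold could be applied in an early warning setting (the example event values are invented):

```python
# Intensity-duration threshold from the study: I = 4.738 * D**(-0.025),
# with I the rainfall intensity (mm/h) and D the duration (h). Events
# above the curve are flagged as potential landslide triggers.
def critical_intensity(duration_h: float) -> float:
    """Threshold rainfall intensity (mm/h) for a given duration (h)."""
    return 4.738 * duration_h ** -0.025

def may_trigger_landslide(intensity_mm_h: float, duration_h: float) -> bool:
    return intensity_mm_h >= critical_intensity(duration_h)

# Example: a hypothetical 6-hour event at 5 mm/h exceeds the threshold
print(round(critical_intensity(6), 2), may_trigger_landslide(5.0, 6))  # 4.53 True
```

The very small exponent (-0.025) makes the threshold nearly flat, so the critical intensity stays close to 4.7 mm/h across a wide range of durations.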

Keywords: landslide, intensity-duration, rainfall threshold, TRMM, slope, inventory, early warning system

Procedia PDF Downloads 274
2133 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration

Authors: Bryce Benson, Sooin Lee, Ashwin Belle

Abstract:

Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps during patient monitoring. Additionally, during deterioration, the body’s autonomic nervous system activates compensatory mechanisms, making the vital signs lagging indicators of underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians identify deteriorating patients early. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and a mean of about 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 while the episode of HI was ongoing. Of the 9 undetected episodes, 5 were missed due to missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional ‘pair of eyes’ on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.

Keywords: clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring

Procedia PDF Downloads 190
2132 Microstructural and Transport Properties of La0.7Sr0.3CoO3 Thin Films Obtained by Metal-Organic Deposition

Authors: K. Daoudi, Z. Othmen, S. El Helali, M. Oueslati, M. Oumezzine

Abstract:

La0.7Sr0.3CoO3 thin films have been epitaxially grown on LaAlO3 and SrTiO3 (001) single-crystal substrates by a metal-organic deposition process. The structural and microstructural properties of the obtained films have been investigated by means of high-resolution X-ray diffraction, Raman spectroscopy, and transmission electron microscopy observations of cross-sections. We noted a close dependence of the crystallinity on the substrate used and the film thickness. By increasing the annealing temperature to 1000 ºC and the film thickness to 100 nm, the electrical resistivity was decreased by several orders of magnitude. The film resistivity reaches approximately 3-4 × 10⁻⁴ Ω·cm over a wide temperature interval of 77-320 K, making this material a promising candidate for a variety of applications.

Keywords: cobaltite, thin films, epitaxial growth, MOD, TEM

Procedia PDF Downloads 334
2131 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro-geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effect of rake angle variations and tool flank wear on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined width, using triangular coated inserts to ensure orthogonal cutting conditions. The cutting forces have been measured by a dynamometer for a set of three different rake angles, and wear progression has been monitored during machining by an optical measuring collaborative robot. The method uses the measured cutting forces together with the insert's flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools, and it shows significant capability in predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
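
The classic Kienzle relation the study extends is F_c = k_c1.1 · b · h^(1−m_c). The sketch below uses that form plus a purely hypothetical additive flank-wear term; the paper's actual wear extension is its contribution and is not reproduced here:

```python
def kienzle_cutting_force(b_mm, h_mm, kc11=1500.0, mc=0.25):
    """Classic Kienzle model: specific cutting force kc1.1 (N/mm^2)
    at h = 1 mm, chip width b (mm), undeformed chip thickness h (mm),
    material exponent mc. Coefficient values are illustrative."""
    return kc11 * b_mm * h_mm ** (1.0 - mc)

def worn_tool_force(b_mm, h_mm, VB_mm, k_wear=400.0, **kw):
    """Hypothetical additive extension: a flank wear land of width VB
    adds a rubbing/ploughing contribution proportional to VB and b."""
    return kienzle_cutting_force(b_mm, h_mm, **kw) + k_wear * VB_mm * b_mm

sharp = kienzle_cutting_force(3.0, 0.1)
worn = worn_tool_force(3.0, 0.1, VB_mm=0.3)
print(f"sharp: {sharp:.0f} N, worn (VB = 0.3 mm): {worn:.0f} N")
```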

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 109
2130 Study for a Non-Invasive Method of Respiratory Resistance Measurement among Patients with Airways Obstructions

Authors: Aicha Laouani, Pascale Calabrese, Sonia Rouatbi, Saad Saguem

Abstract:

Distances between signals (Sd) and between asters (Ad), calculated from respiratory inductive plethysmography (RIP) signals, have been used to evaluate airway resistance (Raw) during a reversibility test in 28 subjects with airway obstruction. Correlation studies between these distances and Raw measured by body plethysmography showed that these RIP variables could potentially be used in airway resistance assessment in patients with airway obstruction. A significant correlation was found between ΔAd and airway resistance changes (ΔRaw) (r = 0.407, p = 0.03), but not between ΔSd and ΔRaw. This assumption was supported by the high correlations found when relating the averages of ΔSd and ΔAd calculated on successive intervals of ΔRaw with the ΔRaw averages calculated for each interval (r = 0.892, p = 0.006 and r = 0.857, p = 0.006, respectively).
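
The r values above are plain Pearson correlation coefficients; a minimal stdlib computation, with invented ΔAd/ΔRaw pairs standing in for the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative paired changes (not the study's measurements).
d_Ad = [0.10, 0.25, 0.05, 0.40, 0.30]
d_Raw = [0.8, 1.9, 0.6, 2.9, 2.1]
print(round(pearson_r(d_Ad, d_Raw), 3))   # → 0.997
```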

Keywords: airways obstruction, distances, respiratory inductive plethysmography, reversibility test

Procedia PDF Downloads 454
2129 Digital Twin for Retail Store Security

Authors: Rishi Agarwal

Abstract:

Digital twins are emerging as a strong technology used to imitate and monitor physical objects digitally in real time across sectors. A digital twin does not only model the digital space; it also actuates responses in the physical space based on digital-space processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins to enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices, such as manual monitoring and metal detectors, which are insufficient for modern needs; the lack of real-time data and system integration leads to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must ensure safety without human intervention. To address this, the paper proposes an intelligent digital twin framework that collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines its different aspects through both narrow and wide lenses to help decision makers decide whether to adopt it. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors, and how data sharing can create a safer world for both retail store customers and owners.

Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety

Procedia PDF Downloads 73
2128 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel, compact, simple-in-design, low-cost, and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, heat input, working fluid, and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature in terms of Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with MARD within ±1.81%.
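
The MARD metric used above can be sketched directly; the thermal resistance values below are invented for illustration:

```python
def mard_percent(predicted, experimental):
    """Mean Absolute Relative Deviation (%) between predicted and
    experimental values."""
    return 100.0 * sum(abs(p - e) / abs(e)
                       for p, e in zip(predicted, experimental)) / len(predicted)

# Illustrative thermal resistances (K/W), not the paper's data.
experimental = [0.50, 0.42, 0.35, 0.30]
predicted = [0.51, 0.41, 0.36, 0.30]
print(round(mard_percent(predicted, experimental), 2))   # → 1.81
```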

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant

Procedia PDF Downloads 293
2127 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We present mathematically the basis of a new trend of algorithms that treats a historical cause of continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, firstly, we introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 158
2126 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in gaining an understanding of the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from Unmanned Aerial Vehicles (UAVs) or satellite imagery, which in turn come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit by carrying out semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data-driven decision making

Procedia PDF Downloads 380
2125 The Effects of Exercise Training on LDL Mediated Blood Flow in Coronary Artery Disease: A Systematic Review

Authors: Aziza Barnawi

Abstract:

Background: Regular exercise reduces risk factors associated with cardiovascular diseases. Over the past decade, exercise interventions have been introduced to reduce the risk of and prevent coronary artery disease (CAD). Elevated low-density lipoproteins (LDL) contribute to the formation of atherosclerosis; their manifestations on the endothelium narrow the coronary arteries and impair endothelial function, which can be assessed with the flow-mediated dilation (FMD) technique. The results of previous studies have been inconsistent and difficult to interpret across different types of exercise programs. The relationship between exercise therapy and lipid levels has been extensively studied, and exercise is known to improve the lipid profile and endothelial function; however, its effectiveness in altering LDL levels and improving blood flow remains controversial. Objective: This review aims to explore the evidence and quantify the impact of exercise training on LDL levels and vascular function as measured by FMD. Methods: The electronic databases PubMed, Google Scholar, Web of Science, the Cochrane Library, and EBSCO were searched using the keywords: “low and/or moderate aerobic training”, “blood flow”, “atherosclerosis”, “LDL mediated blood flow”, “cardiac rehabilitation”, “low-density lipoproteins”, “flow-mediated dilation”, “endothelial function”, “brachial artery flow-mediated dilation”, “oxidized low-density lipoproteins”, and “coronary artery disease”. Included studies lasted 6 weeks or more and reported effects on LDL levels and/or FMD; studies of different training intensities and of endurance training in healthy or CAD individuals were included. Results: Twenty-one randomized controlled trials (RCTs) (14 FMD and 7 LDL studies) with 776 participants (605 exercise participants and 171 control participants) met the eligibility criteria and were included in the systematic review. Endurance training resulted in a greater reduction in LDL levels and their subfractions and a better FMD response. Overall, the training groups showed improved physical fitness compared with the control groups. Participants whose exercise duration was ≥150 minutes/week had significant improvements in FMD and LDL levels compared with those exercising <150 minutes/week. Conclusion: Although the relationship between physical training, LDL levels, and blood flow in CAD is complex and multifaceted, the results are promising for the primary and secondary prevention of CAD by exercise. Exercise training, including resistance, aerobic, and interval training, is positively correlated with improved FMD. However, the small body of evidence for the LDL studies (resistance and interval training) did not show a significant association with improved blood flow. Increasing evidence suggests that exercise training is a promising adjunctive therapy for cardiovascular health, potentially improving blood flow and contributing to the overall management of CAD.

Keywords: exercise training, low density lipoprotein, flow mediated dilation, coronary artery disease

Procedia PDF Downloads 74
2124 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of committing financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or the director owns another firm. To measure the relationships, the numbers of companies owned by the CEO and managed by the CEO, as well as the number of subsidiaries, were calculated; a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were also included because they capture motivating factors for fraud. To test the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous; it takes the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies that belong to the CEO of the firm or are managed by the CEO has a significant impact on the likelihood of financial fraud. The results indicate that the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, a model based on both relational and financial data yields a high level of forecast accuracy.
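
A probit model maps a linear score to a fraud probability through the standard normal CDF. A stdlib sketch with hypothetical coefficients (the abstract reports only significance and ~80% accuracy, not coefficient values):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def fraud_probability(n_ceo_affiliated, b0=-1.0, b1=0.4):
    """P(fraud = 1 | number of CEO-affiliated companies).
    b0 and b1 are hypothetical values, for illustration only."""
    return norm_cdf(b0 + b1 * n_ceo_affiliated)

def accuracy(y_true, probs, threshold=0.5):
    """Share of correct classifications at a probability threshold."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(t == p for t, p in zip(y_true, preds)) / len(y_true)

# More CEO affiliations -> higher predicted fraud probability.
print(round(fraud_probability(0), 3), round(fraud_probability(5), 3))  # → 0.159 0.841
```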

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 121
2123 Release of PVA from PVA/PA Compounds into Water Solutions

Authors: J. Klofac, P. Bazant, I. Kuritka

Abstract:

This work focuses on the preparation of a polymeric blend composed of polyamide (PA) and polyvinyl alcohol (PVA), with the intention of exploring its basic characteristics important for potential use in medicine, especially for drug delivery systems. PA brings excellent mechanical properties to the blend, while PVA is essential due to its water solubility. Blends with different PA/PVA ratios were prepared, and a release study of PVA into water was carried out over a time interval of 0-48 hours via the gravimetric method. The weight decrease is caused by the leaching of PVA domains, which can also be followed by optical and scanning electron microscopy. In addition, the thermal properties and the miscibility of the blend components were evaluated by differential scanning calorimetry. On the basis of the performed experiments, it was found that the kinetics, continuity development, and microstructural features of PA/PVA blends are strongly dependent on the blend composition and the miscibility of its components.

Keywords: release study, polyvinyl alcohol, polyamide, morphology, polymeric blend

Procedia PDF Downloads 397
2122 Analysis of Simply Supported Beams Using Elastic Beam Theory

Authors: M. K. Dce

Abstract:

The aim of this paper is to investigate the behavior of simply supported beams of rectangular section subjected to a uniformly distributed load (UDL). In this study, beams of span 5 m, 6 m, 7 m, and 8 m have been considered. The width of all the beams is 400 mm, and the span-to-depth ratio has been taken as 12. The superimposed live load has been increased from 10 kN/m to 25 kN/m in increments of 5 kN/m. The analysis of the beams has been carried out using elastic beam theory. On the basis of the present study, it has been concluded that the maximum bending moment as well as the maximum deflection occur at the mid-span of a simply supported beam, and their magnitudes increase in proportion to the magnitude of the UDL. Moreover, the study shows that the maximum moment is proportional to the square of the span and the maximum deflection is proportional to the fourth power of the span.
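
The closed-form elastic results behind these conclusions are M_max = wL²/8 and δ_max = 5wL⁴/(384EI). A sketch for one of the study's beams; the concrete modulus E is an assumed value, as the abstract does not give one:

```python
def simply_supported_udl(w, L, E, I):
    """Midspan responses of a simply supported beam under UDL w:
    moment M = w L^2 / 8 (kN·m), deflection d = 5 w L^4 / (384 E I) (m).
    Units: w in kN/m, L in m, E in kPa, I in m^4."""
    M = w * L ** 2 / 8.0
    d = 5.0 * w * L ** 4 / (384.0 * E * I)
    return M, d

# Beam from the study: span 6 m, width 0.4 m, span/depth = 12 -> depth 0.5 m.
b, L = 0.4, 6.0
h = L / 12
I = b * h ** 3 / 12      # second moment of area, ~4.17e-3 m^4
E = 25e6                 # kPa; assumed modulus for concrete
M, d = simply_supported_udl(25.0, L, E, I)
print(round(M, 1))       # → 112.5 (kN·m)
```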

Keywords: beam, UDL, bending moment, deflection, elastic beam theory

Procedia PDF Downloads 391
2121 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature

Authors: Kibrom Hadush

Abstract:

Countries like Ethiopia, whose economy depends mainly on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. This study was therefore conducted to examine the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global Sea Surface Temperature (SST) at different lag times. Trends in rainfall have been analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show that there is no significant trend in the annual, Kiremt, monthly, and dekadal rainfall totals at most of the stations studied. Moreover, rainfall in the study area varies spatially and temporally, increasing from the northeastern Rift Valley to the northwestern highlands. Graphical correlation and a multiple linear regression model are employed to investigate the association between global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results of this study show that, in general, SST in the equatorial Pacific Ocean is the main source of the predictive skill of Kiremt rainfall variability over the northern half of Ethiopia; regional SSTs in the Atlantic and Indian Oceans also contribute.

Moreover, the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is associated with the corresponding persistent warming of the SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It is also found that the monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and the northeastern lowlands of Ethiopia is positively correlated with the SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, the SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with the Kiremt season rainfall in the northeastern highlands. Overall, the results show that prediction models using combined SSTs over various ocean regions (equatorial and tropical) performed reasonably well in predicting monthly and dekadal rainfall (with R² ranging from 30% to 65%) and are recommended for efficient prediction of Kiremt rainfall over the study area, to aid systematic and informed decision making within the agricultural sector.
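
A single-predictor simplification of the regression step (one SST anomaly series predicting seasonal rainfall), with the R² goodness-of-fit the study reports; the study itself uses multiple SST predictors, and the values below are invented:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = b0 + b1*x, returning (b0, b1, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return b0, b1, 1.0 - ss_res / ss_tot

# Invented Pacific SST anomalies (°C) vs. Kiremt rainfall totals (mm):
# warmer equatorial Pacific, lower seasonal rainfall.
sst = [-0.8, -0.3, 0.1, 0.5, 1.1]
rain = [520, 480, 470, 430, 390]
b0, b1, r2 = fit_linear(sst, rain)
print(round(b1, 1), round(r2, 2))
```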

Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature

Procedia PDF Downloads 142
2120 Design of Sustainable Concrete Pavement by Incorporating RAP Aggregates

Authors: Selvam M., Vadthya Poornachandar, Surender Singh

Abstract:

Reclaimed Asphalt Pavement (RAP) aggregates are generally dumped in the open after the demolition of asphalt pavements. Utilizing RAP aggregates in cement concrete pavements may provide several socio-economic-environmental benefits and could support the circular economy: cross-recycling RAP aggregates in concrete pavement could reduce the consumption of virgin aggregates and save fertile land. However, the structural as well as functional properties of RAP-concrete can be significantly lower than those of conventional Pavement Quality Concrete (PQC) pavements. This warrants judicious selection of the RAP fraction (coarse and fine aggregates) along with an accurate proportion of the same for PQC highways. Also, the selection of the RAP fraction and its proportion should not be based solely on the mechanical properties of RAP-concrete specimens but should also be governed by the structural and functional behavior of the pavement system. In this study, an effort has been made to predict the optimum RAP fraction and its corresponding proportion for cement concrete pavements, considering both low-volume and high-volume roads. Initially, the effect of the inclusion of RAP on the fresh and mechanical properties of concrete pavement mixes is mapped through an extensive literature survey covering almost all studies available to date. The Indian Roads Congress (IRC) method, the most widely used design method in India for the analysis of concrete pavements, has been adopted for this study. Subsequently, fatigue damage analysis is performed to evaluate the required safe thickness of the pavement slab for different fractions of (coarse) RAP. The performance of RAP-concrete is then predicted by employing the AASHTO 1993 model for the following distress conditions: faulting, cracking, and smoothness. The performance prediction and total cost analysis indicate that the optimum proportions of coarse RAP aggregates in the PQC mix are 35% and 50% for high-volume and low-volume roads, respectively.
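
The fatigue damage check behind the slab-thickness analysis is typically a Miner's-rule summation with stress-ratio-based allowable repetitions. A sketch using the PCA-type relations adopted in IRC:58; the coefficients are reproduced from memory and the load spectrum is illustrative, so verify against the guideline before use:

```python
def allowable_repetitions(sr):
    """Allowable load repetitions for a stress ratio
    sr = flexural stress / modulus of rupture (PCA-type relations)."""
    if sr < 0.45:
        return float("inf")                      # effectively unlimited life
    if sr < 0.55:
        return (4.2577 / (sr - 0.4325)) ** 3.268
    return 10 ** ((0.9718 - sr) / 0.0828)

def miner_damage(load_groups):
    """Miner's rule: sum of expected/allowable repetitions per axle-load
    group; the slab thickness is safe if the sum stays below 1.0."""
    return sum(n / allowable_repetitions(sr) for sr, n in load_groups)

# Illustrative axle-load spectrum: (stress ratio, expected repetitions).
groups = [(0.40, 5e6), (0.50, 2e5), (0.60, 1e4)]
print(round(miner_damage(groups), 3))
```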

Keywords: concrete pavement, RAP aggregate, performance prediction, pavement design

Procedia PDF Downloads 159
2119 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) for future events, given source characteristics, source-to-site distance, and local site conditions. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms; the conventional method, however, is the better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states and, therefore, control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
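
The conventional fixed-form baseline referred to above can be sketched as a log-linear ground-motion equation; the coefficients below are invented for illustration, and published models have many more terms (site, style of faulting, etc.):

```python
from math import exp, log

def ln_pga(M, R_km, c0=-3.5, c1=0.8, c2=-1.2):
    """Fixed-form ground-motion model: ln(PGA) = c0 + c1*M + c2*ln(R).
    c0..c2 are illustrative, not from any published model."""
    return c0 + c1 * M + c2 * log(R_km)

# Physically sound behavior that ML alternatives must also reproduce:
# PGA grows with magnitude and decays with distance.
print(exp(ln_pga(7.0, 10.0)) > exp(ln_pga(5.0, 10.0)))   # → True (magnitude scaling)
print(exp(ln_pga(6.0, 10.0)) > exp(ln_pga(6.0, 100.0)))  # → True (distance decay)
```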

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 106
2118 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects

Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes

Abstract:

Insects are extremely successful organisms; a sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many species of mosquitoes, such as Anopheles gambiae, to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), completely removed the sensitivity to bombykol, affecting the pheromone-source searching behavior of male moths. Thus, the detection and identification of olfactory receptors in insect genomes is fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying Hidden Markov Models (HMMs) and different computational tools, potential candidates for pheromone receptors in Tuta absoluta were obtained, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.

Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction

Procedia PDF Downloads 150
2117 Proposed Alternative System for Existing Traffic Signal System

Authors: Alluri Swaroopa, L. V. N. Prasad

Abstract:

Along with rapid urbanization worldwide, traffic control has become a major issue in urban construction; having an efficient and reliable traffic control system is crucial to macro-traffic control. Traffic signals manage conflicting requirements by allocating different sets of mutually compatible traffic movements to distinct time intervals, and many approaches have been proposed to solve this discrete stochastic problem. Recognizing the need to minimize right-of-way impacts while efficiently handling the anticipated high traffic volumes, the proposed alternative system offers an effective design. This model allows for increased traffic capacity and reduces delays by eliminating a step in maneuvering through the freeway interchange. The concept proposed in this paper involves the construction of bridges and ramps at the intersection of four roads to control vehicular congestion and prevent traffic breakdown.

Keywords: bridges, junctions, ramps, urban traffic control

Procedia PDF Downloads 554