Search results for: 3d acoustic streaming flow visualization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5652

2892 The Effect of the Flow Pipe Diameter on the Rheological Behavior of a Polymeric Solution (CMC)

Authors: H. Abchiche, M. Mellal

Abstract:

The aim of this work is to study the parameters that influence the rheological behavior of a complex fluid (a sodium carboxymethylcellulose (CMC) solution) on a capillary rheometer. An installation was built so that the diameter of the test conduits could be varied. The obtained results allowed us to deduce that the diameter of the test conduits has a remarkable effect on the rheological response.

Keywords: Bingham fluid, CMC, cylindrical conduit, rheological behavior

Procedia PDF Downloads 323
2891 The Immunosuppressive Effects of Silymarin with Rapamycin on the Proliferation and Apoptosis of T Cells

Authors: Nahid Eskandari, Marjan Ghagozolo, Ehsan Almasi

Abstract:

Introduction: Silymarin, a polyphenolic flavonoid derived from milk thistle (Silybum marianum), is known to have antioxidant, immunomodulatory, antiproliferative, antifibrotic, and antiviral effects. The goal of this study was to determine the immunosuppressive effect of Silymarin on the proliferation and apoptosis of human T cells in comparison with Rapamycin and FK506. Methods: Peripheral blood mononuclear cells (PBMCs) from healthy individuals were activated with Con A (5 µg/ml) and then treated with Silymarin, Rapamycin, and FK506 at various concentrations (0.001, 0.01, 0.1, 1, 10, 100, and 200 µM) for 5 days. PBMCs were examined for proliferation using a CFSE assay, and the concentration that inhibited 50% of cell proliferation (IC50) was determined for each treatment. For the apoptosis assay, PBMCs were activated with Con A and treated with the IC50 dose of Silymarin, Rapamycin, or FK506 for 5 days; cell apoptosis was then analysed by FITC-annexin V/PI staining and flow cytometry. The effects of Silymarin, Rapamycin, and FK506 on the activation of the PARP (poly ADP-ribose polymerase) pathway in PBMCs stimulated with Con A and treated with the IC50 dose of the drugs for 5 days were evaluated using the PathScan cleaved PARP sandwich ELISA kit. Results: This study showed that Silymarin inhibits T cell proliferation in vitro. Moreover, our results indicated that 100 µM (P < 0.001) and 200 µM (P < 0.001) Silymarin had a greater inhibitory effect on T cell proliferation than FK506 and Rapamycin. Our data showed that the effective doses (IC50) of Silymarin, FK506, and Rapamycin were 3×10⁻⁵ µM, 10⁻⁸ µM, and 10⁻⁶ µM, respectively. The data also showed that the inhibitory effect of Silymarin, FK506, and Rapamycin on T cell proliferation was not due to cytotoxicity, and none of these drugs at the IC50 concentration affected the level of cleaved PARP.
Conclusion: Silymarin could be a good candidate for immunosuppressive therapy for certain medical conditions with superior efficacy and lesser toxicity in comparison with other immunosuppressive drugs.

Keywords: silymarin, immunosuppressive effect, rapamycin, immunology

Procedia PDF Downloads 260
2890 Combating Plastic from Kanpur City, Uttar Pradesh, India, Entering the Marine Environment

Authors: Arvind Kumar

Abstract:

The city of Kanpur is located in the terrestrial plain area on the bank of the river Ganges and is the second largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of MSW. Kanpur has been known as a major point- and non-point-source pollution hotspot for the river Ganges. The city has a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of waste leakage that subsequently joins the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic (macro, meso, and micro) from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, with over 0.12 million tonnes of plastic discharged into marine ecosystems per year, and is among the 14 continental rivers into which over a quarter of global waste is discarded. 3.150 kilotons of plastic waste are generated in Kanpur, of which 10%-13% leaks into the local drains and water flow systems. With the support of Kanpur Municipal Corporation, a 1 TPD capacity MRF for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and recycling it sustainably with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains in Kanpur using GIS maps developed during the survey. It suggested putting floating 'boom barriers' made of a low-cost material across the drains, which reduced their cost to only 2000 INR per barrier.
The project was built upon a self-sustaining financial model and includes activities in which a cost-efficient model is developed and adopted for a socially self-inclusive model. The project has recommended the use of low-cost floating boom barriers for capturing waste from drains. This involves a one-time cost and has no operational cost. Manpower is engaged in fishing out and collecting the immobilized waste, and their salaries are paid by Plastic Fisher. The captured material is sun-dried and transported to a designated place, where the shed and power connection that serve as the MRF are provided by the city municipal corporation. Material aggregation, baling, and transportation costs to end-users are borne by Plastic Fisher as well.

Keywords: Kanpur, marine environment, drain waste management, plastic fisher

Procedia PDF Downloads 64
2889 Non-Linear Load-Deflection Response of Shape Memory Alloys-Reinforced Composite Cylindrical Shells under Uniform Radial Load

Authors: Behrang Tavousi Tehrani, Mohammad-Zaman Kabir

Abstract:

Shape memory alloys (SMA) are often implemented in smart structures as the active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded in composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce the radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells. The cylindrical shells are subjected to a uniform external pressure load. Based on first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The one-dimensional simplified Brinson model is used for determining the SMA recovery force due to its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain non-linear load-deflection curves. The results are verified by comparison with those in the literature. Several parametric studies are conducted in order to investigate the effect of the SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable usage of SMA wires results in a considerable enhancement of the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
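For context, the simplified one-dimensional Brinson constitutive relation commonly used to evaluate the SMA recovery stress takes the following form (a sketch from the general SMA literature; the paper's exact formulation is not reproduced here):

```latex
\sigma - \sigma_0 = E(\xi)\,\varepsilon - E(\xi_0)\,\varepsilon_0
                  + \Omega(\xi)\,\xi_s - \Omega(\xi_0)\,\xi_{s0}
                  + \Theta\,(T - T_0)
```

where \(E(\xi) = E_A + \xi (E_M - E_A)\) is the phase-dependent modulus, \(\Omega(\xi) = -\varepsilon_L E(\xi)\) is the transformation tensor, \(\xi_s\) is the stress-induced martensite fraction, \(\varepsilon_L\) is the maximum recoverable strain, and \(\Theta\) is the thermoelastic coefficient.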

Keywords: airy stress function, cylindrical shell, Galerkin technique, load-deflection curve, recovery stress, shape memory alloy

Procedia PDF Downloads 185
2888 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

The task of detecting email spam is an important one in the era of digital technology, which needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific email classifications, helping users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results and thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
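As a rough illustration of the model-agnostic idea (not the authors' pipeline, which uses the LIME library proper), a word-masking attribution for a black-box spam scorer can be sketched as follows; the keyword weights and scorer here are entirely hypothetical stand-ins for a trained classifier:

```python
import math
import random

# Toy stand-in for a trained black-box spam classifier: any callable
# mapping a token list to a spam probability would do in its place.
SPAM_WORDS = {"free": 2.0, "winner": 1.5, "credit": 1.0}

def spam_score(tokens):
    s = sum(SPAM_WORDS.get(t, -0.2) for t in tokens)
    return 1 / (1 + math.exp(-s))  # logistic squash to a probability

def explain(tokens, score_fn, n_samples=500, seed=0):
    """LIME-style local attribution: perturb the email by keeping random
    subsets of tokens and average each token's marginal effect on the
    black-box score."""
    rng = random.Random(seed)
    effects = {t: 0.0 for t in tokens}
    counts = {t: 0 for t in tokens}
    for _ in range(n_samples):
        keep = [t for t in tokens if rng.random() < 0.5]
        base = score_fn(keep)
        for t in tokens:
            if t not in keep:
                counts[t] += 1
                effects[t] += score_fn(keep + [t]) - base
    return {t: effects[t] / max(counts[t], 1) for t in tokens}

weights = explain(["free", "credit", "meeting", "tomorrow"], spam_score)
most_influential = max(weights, key=weights.get)
```

For this toy scorer, "free" comes out as the most influential term, while neutral words like "meeting" receive negative attributions; the real LIME library fits a weighted local linear model rather than averaging raw marginals.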

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression

Procedia PDF Downloads 39
2887 Navigating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management

Authors: Derrick Mirindi, Frederic Mirindi, Oluwakemi Oshineye

Abstract:

The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. This work specifically focuses on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, in improving stakeholder engagement and project outcomes. Drawing on existing literature and examples of failed projects, the study highlights how evolving digital tools serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows to mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates for a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.

Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software

Procedia PDF Downloads 67
2886 Mechanical Properties and Microstructural Analyses of Epoxy Resins Reinforced with Satin Tissue

Authors: Băilă Diana Irinel, Păcurar Răzvan, Păcurar Ancuța

Abstract:

Although the volume of fibre-reinforced polymer composites (FRPs) used for aircraft applications is a relatively small percentage of total use, these materials often find their most sophisticated applications in this industry. In aerospace, the performance criteria placed upon materials can be far greater than in other areas; key aspects are light weight, high strength, high stiffness, and good fatigue resistance. Composites were first used by the military before the technology was applied to commercial planes. Nowadays, composites are widely used, the result of a gradual direct substitution of metal components followed by the development of integrated composite designs as confidence in FRPs has increased. Airplanes use a range of components made from composites, including the fin and tailplane. In recent years, composite materials have been increasingly used in automotive applications due to the improvement of material properties. In the aerospace and automotive sectors, fuel consumption is proportional to the weight of the body of the vehicle. A minimum of 20% of the cost can be saved if polymer composites are used in place of metal structures, and the operating and maintenance costs are also very low. Glass fiber-epoxy composites are widely used in the making of aircraft and automobile body parts and are not limited to these fields: they are also used in shipbuilding, structural applications in civil engineering, pipes for the transport of liquids, and electrical insulators in reactors. This article establishes the high performance of a glass-epoxy composite material used in the automotive and aeronautic domains by means of tensile and flexural tests and SEM analyses.

Keywords: glass-epoxy composite, tensile and flexural tests, SEM analysis, acoustic emission (AE) signals

Procedia PDF Downloads 97
2885 Perceptual and Ultrasound Articulatory Training Effects on English L2 Vowels Production by Italian Learners

Authors: I. Sonia d’Apolito, Bianca Sisinni, Mirko Grimaldi, Barbara Gili Fivela

Abstract:

The American English contrast /ɑ-ʌ/ (cop-cup) is difficult for Italian learners to produce, since they realize L2 /ɑ-ʌ/ as L1 /ɔ-a/ respectively, due to differences in the phonetic-phonological systems and in grapheme-to-phoneme conversion rules. In this paper, we try to answer the following research questions: Can a short training improve the production of English /ɑ-ʌ/ by Italian learners? Is a perceptual training better than an articulatory (ultrasound, US) training? Thus, we compare a perceptual training with a US articulatory one to observe: 1) the effects of short trainings on L2 /ɑ-ʌ/ productions; 2) whether the US articulatory training improves pronunciation more than the perceptual training. In this pilot study, 9 Salento-Italian monolingual adults participated: 3 subjects performed a 1-hour perceptual training (ES-P); 3 subjects performed a 1-hour US training (ES-US); and 3 control subjects did not receive any training (CS). Verbal instructions about the phonetic properties of L2 /ɑ-ʌ/ and L1 /ɔ-a/ and their differences (representation on the F1-F2 plane) were provided during both trainings. After these instructions, the ES-P group performed an identification training based on the High Variability Phonetic Training procedure, while the ES-US group performed the articulatory training by means of US videos of tongue gestures in L2 /ɑ-ʌ/ production and a dynamic view of their own tongue movements and position using a probe under their chin. The acoustic data were analyzed and the first three formants were calculated. Independent t-tests were run to compare: 1) /ɑ-ʌ/ in the pre- vs. post-test, respectively; 2) /ɑ-ʌ/ in the pre- and post-test vs. L1 /a-ɔ/, respectively. Results show that in the pre-test all speakers realize L2 /ɑ-ʌ/ as L1 /ɔ-a/, respectively. Contrary to the CS and ES-P groups, the ES-US group in the post-test differentiates the L2 vowels from those produced in the pre-test as well as from the L1 vowels, although only one ES-US subject produces both L2 vowels accurately. The articulatory training seems more effective than the perceptual one, since it favors the production of vowels in the correct direction of the L2 vowels and away from the similar L1 vowels.

Keywords: L2 vowel production, perceptual training, articulatory training, ultrasound

Procedia PDF Downloads 251
2884 Non-Destructive Technique for Detection of Voids in the IC Package Using Terahertz-Time Domain Spectrometer

Authors: Sung-Hyeon Park, Jin-Wook Jang, Hak-Sung Kim

Abstract:

In recent years, the Terahertz (THz) time-domain spectroscopy (TDS) imaging method has received considerable interest as a promising non-destructive technique for the detection of internal defects. In comparison to other non-destructive techniques, such as the x-ray inspection method, scanning acoustic tomography (SAT), and the microwave inspection method, the THz-TDS imaging method has many advantages. First, it can measure the exact thickness and location of defects. Second, it does not require a liquid couplant, which is crucial for delivering ultrasonic power in the SAT method. Third, it neither damages materials nor harms human bodies, as the x-ray inspection method does. Finally, it exhibits better spatial resolution than the microwave inspection method. However, this technology could not previously be applied to IC packages: THz radiation can penetrate a wide variety of materials, including polymers and ceramics, but not metals. It is therefore difficult to detect defects in IC packages, which are composed not only of epoxy and semiconductor materials but also of various metals such as copper, aluminum, and gold. In this work, we propose a special method for detecting voids in IC packages using a THz-TDS imaging system. The IC package specimens for this study were prepared by the Packaging Engineering Team at Samsung Electronics. Our THz-TDS imaging system has a special reflection mode, called pitch-catch mode, which can change the incidence angle in the reflection mode from 10° to 70°, while other systems have only a transmission mode and a normal reflection mode, or a reflection mode fixed at a certain angle. Therefore, to find the voids in the IC package, we investigated the appropriate angle by changing the incidence angle of the THz wave emitter and detector. As a result, the voids in the IC packages were successfully detected using our THz-TDS imaging system.

Keywords: terahertz, non-destructive technique, void, IC package

Procedia PDF Downloads 468
2883 Immobilized Iron Oxide Nanoparticles for Stem Cell Reconstruction in Magnetic Particle Imaging

Authors: Kolja Them, Johannes Salamon, Harald Ittrich, Michael Kaul, Tobias Knopp

Abstract:

Superparamagnetic iron oxide nanoparticles (SPIONs) are nanoscale magnets which can be biologically functionalized for biomedical applications. Stem cell therapies to repair damaged tissue, magnetic fluid hyperthermia for cancer therapy, and targeted drug delivery based on SPIONs are prominent examples where the visualization of a preferably low-concentration SPION distribution is essential. In 2005, a new method for tomographic SPION imaging was introduced. The method, named magnetic particle imaging (MPI), takes advantage of the nanoparticles' magnetization change caused by an oscillating external magnetic field and allows the time-dependent nanoparticle distribution to be imaged directly. The SPION magnetization can be changed by the electron spin dynamics as well as by a mechanical rotation of the nanoparticle. In this work, different calibration methods in MPI are investigated for image reconstruction of magnetically labeled stem cells. It is shown that a calibration using rotationally immobilized SPIONs provides a higher quality of stem cell images with fewer artifacts than a calibration using mobile SPIONs. The enhancement of the image quality and the reduction of artifacts enable the localization and identification of a smaller number of magnetically labeled stem cells. This is important for future medical applications where low concentrations of functionalized SPIONs interacting with biological matter have to be localized.

Keywords: biomedical imaging, iron oxide nanoparticles, magnetic particle imaging, stem cell imaging

Procedia PDF Downloads 452
2882 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and U-Net, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
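Once a segmentation model such as U-Net produces a binary tree-crown mask, a count can be derived by labeling connected components. The abstract does not describe this post-processing step, so the following is only a minimal illustrative sketch of that downstream counting stage:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected components of 1s in a binary mask -- a simple
    stand-in for turning a segmentation map into a tree count."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new crown found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # flood-fill the crown
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

# Toy mask with three separate crowns
mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
n_trees = count_components(mask)
```

Production pipelines typically use `scipy.ndimage.label` or watershed splitting instead, since touching crowns merge into one component under plain flood fill.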

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 57
2881 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps were developed toward these goals by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
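The abstract does not name its exact thresholding scheme, so as a hedged illustration, a common automatic choice for separating biofilm signal from background in fluorescence images is Otsu's method, sketched here on a raw intensity list:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold that maximizes between-class
    variance of the grayscale histogram. Illustrative only -- the
    paper's actual thresholding approach is not specified."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = 0.0          # background pixel count so far
    sum_bg = 0.0        # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated populations: dim background (~10) vs bright biofilm (~200)
pixels = [10, 12, 11, 9, 10] * 20 + [200, 198, 202, 201] * 10
t = otsu_threshold(pixels)   # lands between the two clusters
```

In practice one would apply this per image (e.g. via `skimage.filters.threshold_otsu`) after counterstain subtraction, then keep pixels above `t` as the biofilm region.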

Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization

Procedia PDF Downloads 127
2880 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation

Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim

Abstract:

Graduation rates at six-year colleges are becoming a more essential indicator for incoming students and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions since it enables the development of strategic plans that will assist or improve students' performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students' progress and performance. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contain science-major students who graduated within 6 years as of the academic year 2017-2018. This analysis can be used to predict the graduation of students in the next academic year. Different predictive models, such as logistic regression, decision trees, support vector machines, Random Forest, Naïve Bayes, and KNeighborsClassifier, are applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared based on accuracy. The results indicated that the ensemble classifier achieves better accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
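The 5-fold evaluation protocol described above can be sketched in a few lines. This is a generic illustration with a trivial majority-class baseline and synthetic labels, not the authors' classifiers or data:

```python
import random

def kfold_accuracy(X, y, fit, predict, k=5, seed=0):
    """Mean accuracy of a classifier under k-fold cross-validation:
    shuffle indices, hold out each fold in turn, train on the rest."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(model, X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / k

# Majority-class baseline on synthetic graduate-on-time labels:
# "fit" memorizes the most common training label, "predict" returns it.
fit = lambda X, y: max(set(y), key=y.count)
predict = lambda model, x: model
y = [1] * 80 + [0] * 20        # 80% of students graduate on time
X = [[i] for i in range(100)]
acc = kfold_accuracy(X, y, fit, predict)
```

With equal-sized folds, the baseline's mean accuracy equals the majority-class share (0.8 here); real pipelines would swap in scikit-learn estimators and `StratifiedKFold` to keep class ratios balanced per fold.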

Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT (graduate on time)

Procedia PDF Downloads 67
2879 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning

Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan

Abstract:

The conventional dehumidification method in air-conditioning systems mostly utilizes a cooling coil to remove the moisture in the air by cooling the supply air down below its dew point temperature. The supply air then needs to be reheated to meet the set indoor condition, which consumes a considerable amount of energy and affects the coefficient of performance of the system. If the processes of dehumidification and cooling are separated and operated independently, the indoor conditions can be controlled more efficiently. Therefore, decoupling the dehumidification and cooling processes in heating, ventilation, and air-conditioning systems is one of the key next-generation technologies, as in membrane dehumidification processes. The membrane dehumidification method has the advantages of low cost, low energy consumption, etc. It utilizes the pore size and hydrophilicity of the membrane to transfer water vapor by a mass transfer effect. The moisture in the supply air is removed by the potential energy and driving force across the membrane. The process saves the latent load used to condense water, making for more efficient energy use because it does not involve a heat transfer effect. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted with composite and commercial membranes. According to the measured data, a suitable dehumidification membrane can be chosen for designing the flow channel length and components of the planar dehumidifier. A vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity, making it suitable for separating water vapor from air. Nafion membranes therefore have promising potential for the dehumidification process.

Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning

Procedia PDF Downloads 136
2878 Impact of Different Fuel Inlet Diameters onto the NOx Emissions in a Hydrogen Combustor

Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir

Abstract:

The Advisory Council for Aeronautics Research in Europe (ACARE) calls for an overall reduction of NOx emissions by 80% in its Vision 2020, which encourages researchers to work on novel technologies. One such technology is the use of alternative fuels. Among these fuels, hydrogen is of interest because its only significant pollutant is NOx. NOx formation in hydrogen combustion depends on various parameters, such as air pressure, inlet air temperature, air-to-fuel jet momentum ratio, etc. Accordingly, this research investigates the impact of the air-to-fuel jet momentum ratio on NOx formation in a hydrogen combustion chamber for aircraft engines. The air-to-fuel jet momentum ratio is defined as the momentum of the air jet relative to the momentum of the fuel jet. The experiments were performed in an existing combustion chamber that had previously been tested with methane. Premixing of the reactants was not considered due to the high reactivity of hydrogen and the high risk of flashback. In order to create a less rich reaction zone at the burner and to decrease emissions, a forced internal recirculation flow was achieved by integrating a plate similar to a honeycomb structure, suited to the geometry of the liner. The liner was provided with an external cooling system to avoid an increase in local temperatures and, in turn, in the reaction rate of NOx formation. The injected air was preheated to aim at so-called flameless combustion. The air-to-fuel jet momentum ratio was inspected by changing the area of the fuel inlets while keeping the number of fuel inlets constant in order to alter the fuel jet momentum, thus maintaining the homogeneity of the flow. Within this analysis, promising results for flameless combustion were achieved. For a constant number of fuel inlets, reducing the fuel inlet diameter decreased the air-to-fuel jet momentum ratio and, in turn, lowered the NOx emissions.
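The link between inlet diameter and momentum ratio can be sketched numerically. All figures below are hypothetical round numbers for illustration (the paper reports no operating values), and the momentum-flux form J = (ṁ_air·v_air)/(ṁ_fuel·v_fuel) is one common definition, assumed here rather than quoted from the paper:

```python
import math

def jet_momentum_ratio(mdot_air, v_air, mdot_fuel, v_fuel):
    """Air-to-fuel jet momentum ratio J = (mdot_air*v_air)/(mdot_fuel*v_fuel).
    Assumed momentum-flux definition; the paper's exact formula is not given."""
    return (mdot_air * v_air) / (mdot_fuel * v_fuel)

def jet_velocity(mdot, rho, d_inlet, n_inlets):
    """Mean jet velocity through n circular inlets of diameter d:
    v = mdot / (rho * n * pi * d**2 / 4)."""
    return mdot / (rho * n_inlets * math.pi * d_inlet ** 2 / 4)

RHO_H2 = 0.084  # kg/m^3, hydrogen near ambient pressure and temperature

# Hypothetical operating point: fixed fuel mass flow through 8 inlets.
v1 = jet_velocity(1e-4, RHO_H2, 1.0e-3, 8)   # 1.0 mm inlet diameter
v2 = jet_velocity(1e-4, RHO_H2, 0.5e-3, 8)   # 0.5 mm inlet diameter
J1 = jet_momentum_ratio(0.01, 50.0, 1e-4, v1)
J2 = jet_momentum_ratio(0.01, 50.0, 1e-4, v2)
```

Halving the inlet diameter quarters the total inlet area, quadrupling the fuel jet velocity, so for fixed air-side conditions J drops by a factor of 4, consistent with the reported trend that smaller fuel inlets lower the momentum ratio.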

Keywords: combustion chamber, hydrogen, jet momentum, NOx emission

Procedia PDF Downloads 286
2877 Parametric Optimization of High-Performance Electric Vehicle E-Gear Drive for Radiated Noise Using 1-D System Simulation

Authors: Sanjai Sureshkumar, Sathish G. Kumar, P. V. V. Sathyanarayana

Abstract:

For an e-gear drivetrain, the transmission error and the resulting variation in mesh stiffness are among the main sources of excitation in a high-performance electric vehicle. These vibrations are transferred through the shaft to the bearings and then to the e-gear drive housing, eventually radiating noise. A parametric model was developed in 1-D system simulation, optimizing the micro and macro geometry along with bearing properties and oil filtration to achieve the least transmission error and a high contact ratio. Histogram analysis is performed to condense the actual road load data into a condensed duty cycle and find the bearing forces. The structural vibration generated by these forces is simulated in a nonlinear solver to obtain the normal surface velocity of the housing, and the results are carried forward to acoustic software, wherein a virtual environment of the surroundings (the actual testing scenario) with accurate microphone positions is maintained to predict the sound pressure level of the radiated noise and the directivity plot of the e-gear drive. Order analysis is carried out to find the root cause of the vibration and whine noise, and the broadband spectrum is checked to find the rattle noise source. Further, with the available results, the design is optimized, and the next loop of simulation is performed to build the best e-gear drive in terms of NVH. Structural analysis is also carried out to check the robustness of the e-gear drive.

Keywords: 1-D system simulation, contact ratio, e-Gear, mesh stiffness, micro and macro geometry, transmission error, radiated noise, NVH

Procedia PDF Downloads 144
2876 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais

Abstract:

The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB with a graphical user interface, for medical image fusion that explores different fusion methodologies and processes in combination with image pre-processing techniques. The application applies different algorithms and medical fusion techniques in real time, allowing users to view the original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool provides a dynamic and motivating teaching simulation for biomedical engineering students to acquire knowledge of medical image fusion techniques and the skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate, and advance students' knowledge of medical image fusion. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.
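As a minimal illustration of pixel-level fusion of two co-registered images (a sketch only; the actual tool is a MATLAB GUI whose specific algorithms are not reproduced here):

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Pixel-wise average fusion of two co-registered grayscale images."""
    return (img_a.astype(float) + img_b.astype(float)) / 2.0

def fuse_max(img_a, img_b):
    """Pixel-wise maximum-selection fusion (keeps the brighter detail)."""
    return np.maximum(img_a, img_b)

a = np.array([[0, 100], [200, 50]], dtype=np.uint8)
b = np.array([[100, 100], [0, 150]], dtype=np.uint8)
avg = fuse_average(a, b)  # [[50., 100.], [100., 100.]]
mx = fuse_max(a, b)       # [[100, 100], [200, 150]]
```

More advanced fusion (e.g., wavelet-domain rules) follows the same pattern: decompose, combine coefficients per rule, and reconstruct.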

Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education

Procedia PDF Downloads 117
2875 Evaluating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management

Authors: Mirindi Derrick, Mirindi Frederic, Oluwakemi Oshineye

Abstract:

The ongoing high rate of construction project failures worldwide is often blamed on the difficulty of managing stakeholders, which highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. It focuses specifically on the impact of evolving digital tools, such as project management software (PMS) (e.g., Basecamp and Wrike), building information modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), virtual and augmented reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, in improving stakeholder engagement and project outcomes. Through a review of existing literature, including examples of failed projects, the study highlights how evolving digital tools serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows that mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.

Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software

Procedia PDF Downloads 37
2874 Physico-Chemical Characteristics and Possibilities of Utilization of Elbasan Thermal Waters

Authors: Elvin Çomo, Edlira Tako, Albana Hasimi, Rrapo Ormeni, Olger Gjuzi, Mirela Ndrita

Abstract:

In Albania, only low-enthalpy geothermal springs and wells are known; the temperatures of some of them are almost at the upper limit of low enthalpy, reaching over 60°C. These resources can be used to improve the country's energy balance, as well as for profitable economic purposes. The region of Elbasan has the greatest geothermal energy potential in Albania, and this basin is one of the most popular and most used in the country. The area contains a number of springs, located in a chain in the sector between Llixha and Hidraj, and constitutes a thermo-mineral basin with stable discharge and high temperature. The springs of Elbasan, with a current average thermo-mineral water flow of 12-18 l/s at a temperature of 55-65°C, have specific reserves of 39.6 GJ/m² and an installable capacity of 2760 kW. For the assessment of physico-chemical parameters and heavy metals, water samples were taken at 5 monitoring stations throughout 2022. The levels of basic parameters were analyzed using ISO, EU, and APHA (21st edition) standard methods. This study presents the current state of the physico-chemical parameters of this thermal basin, the evaluation of these parameters for curative activities and industrial processes, and the integrated utilization of geothermal energy. It also discusses the possibilities of using thermo-mineral waters for heating homes in the surrounding area or further away, depending on the flow from the spring or geothermal well, and aims to sensitize Albanian investors, medical researchers, and the community to the high economic and curative effectiveness of the integral use of geothermal energy in this area and to the development of the tourist sector. An analysis of the negative environmental impact of the use of thermal water is also provided.

Keywords: geothermal energy, Llixha, physico-chemical parameters, thermal water

Procedia PDF Downloads 121
2873 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (Ibsd) Method

Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao

Abstract:

In current semiconductor research and nanotechnology, quality material synthesis, proper characterization, and production are major challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high-quality Cu2O nanorods to help improve the efficiency of thin-film solar cells and serve other potential applications. Well-structured Cu2O nanorods were successfully fabricated by the IBSD method, in which the samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10⁻⁵ torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase, pure polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. On increasing the oxygen flow rate, a pure single-phase polycrystalline Cu2O thin film was obtained at an Ar:O2 ratio of 6:1. Field emission scanning electron microscope (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences in the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results showed differences in intensities, in the number of bands, and in band positions. Raman characterization shows that the Cu2O NR sample has more pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which shows only one second-overtone Raman signal at 217 cm⁻¹.
The temperature-dependent photoluminescence (PL) spectra showed that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples arises from the change in oxygen flow rate within the process window of sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties toward the tunable fabrication of nano/micro-structured Cu2O samples for improving the efficiency of thin-film solar cells, in addition to other potential applications. Finally, the novel morphologies and excellent structural and optical properties exhibited show that the grown Cu2O NR sample is of sufficient quality for further research on nano/micro-structured semiconductor materials.

Keywords: defect levels, nanorods, photoluminescence, Raman modes

Procedia PDF Downloads 236
2872 The Adoption of Leagility in Healthcare Services

Authors: Ana L. Martins, Luis Orfão

Abstract:

Healthcare systems have been the subject of various research efforts aiming at process improvement under a lean approach. Another perspective, agility, has also been used, though on a smaller scale, to analyse the ability of different hospital services to adapt to demand uncertainties. Both perspectives have a common denominator: improving the effectiveness and efficiency of services in a healthcare context. Mixing the two approaches allows, on the one hand, streamlining of the processes and, on the other hand, the flexibility required to deal with demand uncertainty in terms of both volume and variety. The present research aims to analyse the impact of combining both perspectives on the effectiveness and efficiency of a hospital service. The adopted methodology is based on a case study of the ambulatory surgery service of Hospital de Lamego. Data were collected from direct observations, formal interviews, and informal conversations. The analysed process was selected according to three criteria: relevance of the process to the hospital, presence of human resources, and presence of waste. The customer of the process was identified, as well as his perception of value. The process was mapped using a flow chart, from a process modeling perspective, as well as through Value Stream Mapping (VSM) and Process Activity Mapping. The Spaghetti Diagram was also used to assess flow intensity. The use of the lean tools enabled the identification of three main types of waste: movement, resource inefficiencies, and process inefficiencies; improvement suggestions were produced from these findings.
The results point out that leagility cannot be applied to the process as a whole, but applying lean and agility in specific areas of the process would bring benefits in both efficiency and effectiveness, and would contribute to value creation if improvements are introduced in the hospital's human resources and facilities management.

Keywords: case study, healthcare systems, leagility, lean management

Procedia PDF Downloads 195
2871 GIS-Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin Using ASTER DEM and Meteorological Data

Authors: Abhisek Chakrabarty, Subhraprakash Mandal

Abstract:

Flash floods are among the catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River in Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was the integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloud bursts, glacial lake outbursts, and the collapse of artificial check dams, which cause high river flows. In the Sikkim-Darjeeling Himalaya, one probable flash flood occurrence zone is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m), and the mountain section of the river is 182 km long. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the India Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected.
The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0. The analysis of different hydraulic models to detect flash flood probability was done using HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with run-off, causing severe damage to downstream settlements. The model output shows that 313 sq. km were found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event.
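The direct unit hydrograph construction mentioned above reduces to a discrete convolution of excess rainfall with unit-hydrograph ordinates. A minimal sketch with wholly illustrative ordinates (not the Teesta basin's calibrated values):

```python
def convolve_uh(excess_rain, uh):
    """Direct runoff hydrograph: discrete convolution of excess rainfall
    (cm per interval) with unit-hydrograph ordinates (m^3/s per cm)."""
    q = [0.0] * (len(excess_rain) + len(uh) - 1)
    for i, p in enumerate(excess_rain):
        for j, u in enumerate(uh):
            q[i + j] += p * u
    return q

uh = [0, 10, 30, 20, 5, 0]    # hypothetical ordinates, m^3/s per cm
rain = [1.0, 2.0, 0.5]        # hypothetical excess rainfall, cm per interval
droh = convolve_uh(rain, uh)  # runoff response at the watershed outlet
```

Adding base flow to the convolved response gives the total streamflow hydrograph at the outlet.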

Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin

Procedia PDF Downloads 306
2870 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of an optimization, and this optimization is central to learning theory. One approach to complex systems, where the dynamics of the system are inferred by a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; perhaps this model will reveal further insight into quantum chaos.
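As a sketch of the Kalman filtering mentioned above, here is a minimal scalar filter tracking a noisy level; the model and all numbers are illustrative stand-ins, not the QTS formulation itself:

```python
import random

def kalman_filter(observations, q=1e-3, r=0.5):
    """Scalar Kalman filter: latent level with process-noise variance q,
    observed with measurement-noise variance r."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q                # predict step inflates uncertainty
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # correct with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_level = 3.0
obs = [true_level + random.gauss(0.0, 0.7) for _ in range(200)]
est = kalman_filter(obs)         # converges toward the true level
```

The same predict/correct recursion generalizes to vector states and, in the paper's setting, to parameter estimation for the proposed time series model.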

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 464
2869 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target: the target position detected by each sonar is given in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate the sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least squares processing to each sonar's data to obtain the observation value. MATLAB simulations were carried out in an underwater acoustic environment, and the results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration; the improved method can be used for underwater multi-target detection registration.
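The LS bias estimation described above can be sketched for the simplest case of a constant additive range bias; the simulation values below are illustrative assumptions, not the paper's scenario:

```python
import numpy as np

def estimate_bias_ls(measurements, reference):
    """Least-squares estimate of a constant additive bias b in z_k = x_k + b + n_k.
    Minimizing sum((z_k - x_k - b)^2) over b gives the mean residual."""
    z = np.asarray(measurements, dtype=float)
    x = np.asarray(reference, dtype=float)
    return float(np.mean(z - x))

rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 100.0, size=500)            # target ranges, arbitrary units
bias = 2.5                                            # constant range bias to recover
meas = truth + bias + rng.normal(0.0, 0.3, size=500)  # one sonar's biased, noisy data
b_hat = estimate_bias_ls(meas, truth)                 # recovered bias estimate
```

In practice the reference is not the unknown truth but a fused estimate across sonars, and the same least-squares machinery is applied jointly to the range and bearing biases of each sensor.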

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 163
2868 Study on the Mechanism of CO₂-Viscoelastic Fluid Synergistic Oil Displacement in Tight Sandstone Reservoirs

Authors: Long Long Chen, Xinwei Liao, Shanfa Tang, Shaojing Jiang, Ruijia Tang, Rui Wang, Shu Yun Feng, Si Yao Wang

Abstract:

Tight oil reservoirs have poor physical properties, insufficient formation energy, and low natural productivity, so it is necessary to effectively improve their crude oil recovery. CO₂ flooding is an important technical means of enhancing oil recovery and achieving effective CO₂ storage in tight oil reservoirs, but strong reservoir heterogeneity makes CO₂ flooding prone to gas channeling and poor recovery. Aiming at the problem of gas injection channeling, and drawing on the excellent performance of a low interfacial tension viscoelastic fluid (GOBTK), research on CO₂-low interfacial tension viscoelastic fluid synergistic oil displacement in tight reservoirs was carried out, and the synergistic oil displacement mechanism of CO₂ and the viscoelastic fluid was discussed. Experiments show that GOBTK has good injectability in tight oil reservoirs (Kg = 0.141-0.793 mD). Under heterogeneous conditions (permeability gradient difference of 10), CO₂-0.4% GOBTK synergistic flooding improves the recovery factor of the low-permeability layer (31.41%), an effect better than that of CO₂ flooding (0.56%) or 0.4% GOBTK water flooding (20.99%). The CO₂-GOBTK synergistic oil displacement mechanism includes: 1) the formation of CO₂ foam increases the flow resistance of the viscoelastic fluid, diverting the displacement fluid; 2) GOBTK emulsifies and disperses residual oil into small droplets that pass smoothly through narrow pores and are produced; 3) CO₂ dissolved in GOBTK synergistically enhances the water wettability of the core, aiding the injected viscoelastic fluid in stripping residual oil; 4) the CO₂-GOBTK synergy superimposes multiple mechanisms, effectively improving the swept volume and oil washing efficiency of the injected fluid in the reservoir.

Keywords: tight oil reservoir, CO₂ flooding, low interfacial tension viscoelastic fluid flooding, synergistic oil displacement, EOR mechanism

Procedia PDF Downloads 169
2867 Investigating the Energy Harvesting Potential of a Pitch-Plunge Airfoil Subjected to Fluctuating Wind

Authors: Magu Raam Prasaad R., Venkatramani Jagadish

Abstract:

Recent studies in the literature have shown that randomly fluctuating wind flows can give rise to a distinct regime of pre-flutter oscillations called intermittency. Intermittency is characterized by sporadic bursts of high-amplitude oscillations interspersed amidst low-amplitude aperiodic fluctuations. The focus of this study is on investigating the energy harvesting potential of these intermittent oscillations. The available literature has by and large devoted its attention to extracting energy from flutter oscillations; the possibility of harvesting energy from pre-flutter regimes has remained largely unexplored. However, extracting energy from violent flutter oscillations can be severely detrimental to the structural integrity of airfoil structures. Consequently, investigating the relatively stable pre-flutter responses for energy extraction applications is of practical importance, and the present study is devoted to addressing these concerns. A pitch-plunge airfoil with cubic hardening nonlinearity in the plunge and pitch degrees of freedom is considered. The input flow fluctuations are modelled using a sinusoidal term with randomly perturbed frequencies. An electromagnetic coupling is added to the pitch-plunge equations such that energy from the wind-induced vibrations of the structure is extracted. With the mean flow speed as the bifurcation parameter, a fourth-order Runge-Kutta time-marching algorithm is used to solve the governing aeroelastic equations with electromagnetic coupling. The energy harnessed in the intermittency regime is presented, and the results are discussed in comparison to those obtained from the flutter regime. The insights from this study could be useful in the health monitoring of aeroelastic structures.
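The time-marching scheme described above can be sketched on a reduced problem. The following is a single-degree-of-freedom stand-in for the plunge equation with cubic hardening and electromagnetic extraction, not the paper's full two-degree-of-freedom pitch-plunge system; every parameter value is illustrative:

```python
import math

def rk4_step(f, t, y, dt):
    """One fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Illustrative oscillator: damping zeta, natural frequency omega, cubic
# hardening beta, electromagnetic coupling theta into a load resistance R.
zeta, omega, beta, theta, R = 0.05, 1.0, 1.0, 0.1, 1.0

def wind(t):
    return 0.2 * math.sin(1.05 * t)   # stand-in for the fluctuating wind forcing

def rhs(t, y):
    x, v = y
    return [v, wind(t) - 2 * zeta * omega * v - omega**2 * x
               - beta * x**3 - (theta**2 / R) * v]

y, t, dt, harvested = [0.0, 0.0], 0.0, 0.01, 0.0
for _ in range(20000):
    y = rk4_step(rhs, t, y, dt)
    harvested += ((theta * y[1])**2 / R) * dt   # electrical power integrated in time
    t += dt
```

The harvested energy accumulates from the velocity response, so sporadic high-amplitude bursts (intermittency) contribute disproportionately relative to the quiescent stretches.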

Keywords: aeroelasticity, energy harvesting, intermittency, randomly fluctuating flows

Procedia PDF Downloads 180
2866 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound

Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo

Abstract:

Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The echocardiographic diagnosis of MR is straightforward due to well-known clinical evidence. In determining MR severity, quantification of the sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation: it is the point in a blood stream where the diameter of the stream is smallest and the velocity is maximal. Its quantification, i.e., the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough, as the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, flow toward the probe appears as a red or yellow area, whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, and value) to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results show that the proposed automatic VCW extraction can be efficient and accurate enough for clinical use, and the process has the potential to reduce intra- or inter-observer variability in measuring subtle distances.
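The color-based segmentation step can be sketched crudely as follows. This is a simplified stand-in (a hue/saturation mask plus a row-wise minimum width), not the paper's ellipse-fitting pipeline, and the threshold values and synthetic image are illustrative assumptions:

```python
import numpy as np

def high_flow_mask(rgb):
    """Mask pixels reading as red/yellow high-flow in a Doppler color map.
    rgb: H x W x 3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                                        # HSV value
    s = np.where(v > 0, (v - rgb.min(axis=-1)) / np.maximum(v, 1e-9), 0)
    # red/yellow hues: red channel dominant, blue low, saturated, bright
    return (r >= g) & (b < 0.3) & (s > 0.4) & (v > 0.3)

def narrowest_width(mask):
    """Width (pixels) of the thinnest non-empty row of the jet mask,
    a crude stand-in for the vena contracta width."""
    widths = [int(row.sum()) for row in mask if row.any()]
    return min(widths) if widths else 0

# Tiny synthetic jet: wide at the orifice, pinched in the middle
img = np.zeros((5, 7, 3))
img[0, 1:6] = [1.0, 0.2, 0.0]
img[1, 2:5] = [1.0, 0.5, 0.0]
img[2, 3:4] = [1.0, 0.1, 0.0]   # vena contracta row
img[3, 2:5] = [1.0, 0.4, 0.0]
m = high_flow_mask(img)
vcw_px = narrowest_width(m)     # 1
```

A pixel count would then be converted to centimetres via the image's spatial calibration, and the probe angle correction applied as described above.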

Keywords: mitral regurgitation, vena contracta, color Doppler, image processing

Procedia PDF Downloads 364
2865 Flood Mapping and Inundation on Weira River Watershed (in the Case of Hadiya Zone, Shashogo Woreda)

Authors: Alilu Getahun Sulito

Abstract:

Exceptional floods are now prevalent in many places in Ethiopia, resulting in a large number of human deaths and much property destruction. The Boyo watershed, in particular, has traditionally been vulnerable to flash floods. The goal of this research is to create flood and inundation maps for the Boyo catchment. To attain this objective, geographic information system (GIS) technology was integrated with the hydraulic model HEC-RAS. The peak discharge was determined using the Fuller empirical methodology for return periods of 5, 10, 15, and 25 years, and the results were 103.2 m³/s, 158 m³/s, 222 m³/s, and 252 m³/s, respectively. River geometry, boundary conditions, Manning's n values for the varying land cover, and the peak discharges at the various return periods were all entered into HEC-RAS, and an unsteady flow analysis was performed. The results of the unsteady flow analysis demonstrate that the water surface elevation in the longitudinal profile rises as the return period increases. The flood inundation maps show that the areas with the greatest flood coverage on the right and left sides of the river were 15.418 km² and 5.29 km², respectively, flooded at the 10-, 20-, 30-, and 50-year return periods. High water depths typically occur along the main channel and progressively spread to the floodplains. The study also found that flood-prone areas were disproportionately concentrated on the river's right bank. Combining GIS with hydraulic modelling to create a flood inundation map is therefore a viable approach. The findings of this study can be used to protect the right bank of the Boyo River catchment near the kebeles around Lake Boyo. Furthermore, it is critical to promote an early warning system in the kebeles so that people can be evacuated before a flood calamity happens.
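The Fuller empirical relation used above for the T-year peak discharge is commonly written Q_T = Q_avg(1 + c·log₁₀T). A minimal sketch follows, with an illustrative basin coefficient and mean flood; these assumed values do not reproduce the paper's calibrated 103.2-252 m³/s results:

```python
import math

def fuller_peak_discharge(q_avg, T, c=0.8):
    """Fuller's empirical relation for the T-year maximum daily discharge:
    Q_T = Q_avg * (1 + c * log10(T)). The coefficient c (commonly 0.8) is
    calibrated per basin; all values here are illustrative."""
    return q_avg * (1 + c * math.log10(T))

q_avg = 60.0   # hypothetical mean annual flood, m^3/s
peaks = {T: round(fuller_peak_discharge(q_avg, T), 1) for T in (5, 10, 15, 25)}
```

The resulting peaks grow with the return period and feed HEC-RAS as the upstream flow boundary condition for each scenario.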

Keywords: flood, Weira River, Boyo, GIS, HEC-GeoRAS, HEC-RAS, inundation mapping

Procedia PDF Downloads 42
2864 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that following is not as strict as in homogeneous, lane-disciplined conditions; neighbouring vehicles ahead of a given vehicle, and those adjacent to it, can also influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used to evaluate performance measures such as link speed, midblock delay, and intersection delay, which further helps to characterise vehicular fuel consumption and emissions on Indian urban roads. Identifying and analyzing the exact interactions between the various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling.
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics in heterogeneous, non-lane-based traffic. Once developed, the modelling scheme can be used to estimate the vehicle kilometres travelled for the entire traffic system, which helps determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of the simulation results and associated traffic flow parameters, and application to the analysis of illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (passenger car unit) values, capacity, and level of service of the system are also discussed.
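Fitting a parametric distribution per lead-lag pair, as outlined above, can be sketched with a method-of-moments lognormal fit on synthetic gaps; the distribution choice and all numbers below are illustrative assumptions, not the Chennai data:

```python
import math
import random

def fit_lognormal(gaps):
    """Method-of-moments fit of a lognormal to observed time gaps:
    returns the mean and standard deviation of log(gap)."""
    logs = [math.log(g) for g in gaps]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return mu, math.sqrt(var)

random.seed(42)
# Synthetic gaps (seconds) standing in for one lead-lag vehicle pair's data
gaps = [random.lognormvariate(0.5, 0.6) for _ in range(2000)]
mu_hat, sigma_hat = fit_lognormal(gaps)
```

In a simulation, sampling inter-arrival gaps from the fitted distribution of each (lead class, lag class, manoeuvre, lateral position) cell generates the vehicle stream whose aggregate measures are then validated against field data.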

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 337
2863 The Use of Complementary and Alternative Medicine for Pain Relief in the Elderly: An Investigational Analysis of Seniors Residing in an Independent/Assisted Seniors’ Living Facility

Authors: Carol Cameletti

Abstract:

The goal of this study was to perform a pilot survey to assess pain frequency and intensity in an elderly population and to assess treatment options for chronic pain that include complementary and alternative medicine (CAM). Ten participants were recruited from an independent and supportive living housing facility in Northern Ontario and asked to complete two questionnaires: 1) a self-assessment of pain, and 2) the use of CAM for pain. The results show that 80% of the participants experienced pains other than regular everyday pains such as minor headaches, sprains, or toothaches. Although participants stated that, on average, the highest level of pain they experienced within the past 24 hours scored 6.5 (0 = no pain, 10 = worst pain imaginable), the pain they experienced moderately interfered with their daily activities. Unfortunately, participants stated that they were able to attain only minimal pain relief using treatments or medications, causing some of them to seek alternative therapies or self-help practices. The most commonly used CAMs were vitamins/minerals, herbs and supplements, and self-help practices such as meditation, prayer, visualization, and relaxation techniques. Although some participants stated that they had received complementary treatments directly from their physician, four of the nine participants said that they had not disclosed CAM use to their physician, indicating a need to open the lines of communication between healthcare providers and patients with regard to CAM use. It is our hope that the data generated from this study will serve as the platform for a pain management clinic that is client-centered, consumer-driven, and truly integrative, tailored to meet the unique needs of older adults in Greater Sudbury, Ontario.

Keywords: alternative, complementary, elderly, medicine

Procedia PDF Downloads 175