Search results for: generalized estimating equations
419 Powerful Media: Reflection of Professional Audience
Authors: Hamide Farshad, Mohammadreza Javidi Abdollah Zadeh Aval
Abstract:
As a result of the growing penetration of the media into human life, a new role, the "audience," has been defined in social life, a role that has changed dramatically since its formation. This article aims to define the position of the audience in the new media equations, which leads to a transformation of the media's role. Using a library-based, attributive method, the history of the concept, the evolving outlook on the audience, and the relation between audience and media in the new media context are studied. In the past, public communication was understood as something the audience merely received. But with the emergence of interactive media and the transformation of the audience's social life, a new kind of public communication has formed, and the imaginary picture of the audience has been replaced by the audience's actual impact on the communication process. Part of this impact takes the form of feedback, one of the elements of public communication. In public communication, audience feedback is fully accepted; but in many cases the media changes its direction along with that feedback, and this shift is known as media feedback. In this state, the media and the audience are both actors and continually exchange positions in an interaction. With the growing number of audiences and media, this process has taken on a new character: the role of actor is sometimes played by one audience influencing another, or by one media outlet influencing another. This article models this multiple public communication process as "The bilateral influence of the audience and the media." According to this model, the power of the audience and of the media are not two sides of a coin; rather, once both are accepted as actors, their bilateral powers complement each other.
Furthermore, the compatibility between the media and the audience is analyzed under the hypothesis of a bilateral, interactional relation; by analyzing the action-law hypothesis, the dos and don'ts of this role are defined, which the media must know and accept in order to survive. These laws also play a determining role in the strategic studies of a media outlet.
Keywords: audience, effect, media, interaction, action laws
Procedia PDF Downloads 486
418 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest match with data measured in a real DWDS would reduce both cost and the consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters with the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to optimize the three independent water quality parameters. High and low levels of the water quality parameters were necessarily imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network is minimized at 0.189, while the optimum conditions for averaged water supply occur at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual has great potential to help water treatment plant operators accurately estimate the mono-chloramine residual through a water distribution network. Additional studies of other water distribution systems are warranted to confirm the applicability of the proposed methodology to other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
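The RMSE the authors minimize is the standard fit metric between modelled and measured residuals. As a minimal illustration (the node values below are hypothetical, not the paper's data):

```python
import math

def rmse(predicted, measured):
    """Root mean square error between model predictions and field measurements."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical mono-chloramine residuals (mg/L) at four network sampling points
predicted = [2.1, 1.8, 1.5, 1.2]
measured = [2.0, 1.9, 1.4, 1.4]
print(round(rmse(predicted, measured), 3))  # -> 0.132
```

In the study, this quantity is evaluated over the whole network and driven down to 0.189 by tuning pH, temperature and the initial mono-chloramine concentration.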
Procedia PDF Downloads 224
417 Cultural Psychology in Sports: How Understanding Culture May Help Sports Psychologists Augment Athletic Performance
Authors: Upasana Ranjib
Abstract:
Sports psychology, as a niche area, has over the last two decades found for itself a space on the outer periphery of the discipline of traditional psychology. It has aimed to understand the many variables that push athletes to enhance their performance. While sociological aspects have been duly represented in academia, little has been written about the role of culture in shaping the psyche of athletes. The impact that the cultures of different communities and societies have on specifics like gender, caste, religion and race, and how that shapes an individual, has not been fully addressed. In the case of sport, culture makes itself felt in the form of stereotypes and traditional outlooks towards sects, and in their implications for engagement with sports. Culture is an environment that an individual imbibes. It is what shapes them, physically as well as mentally. Their nurture and nature both stem from it and depend on it. To trace the linkages between nurture, nature and sporting efficiency, cultural studies must collaborate in scholarship with psychology and practical sports. Cultural sports psychology would allow sports psychologists, coaches and even athletes themselves to understand the behavioural variations that affect performance. The variations in the performance of athletes from different cultures and countries could be attributed to their socio-political, economic and environmental differences. These cultural influences shape the athlete's behaviour and might serve as a gateway to understanding their skill sets and internal motivational factors.
With that knowledge in mind, this paper aims to understand and reflect on how, in the present times of heavy sporting competition, shifting cultural equations and changing world dynamics, it is essential to infuse cultural studies into sports psychology so that sports psychologists can help augment the performances of athletes.
Keywords: sporting performance, Asian sports, sports psychology, cultural psychology, society
Procedia PDF Downloads 90
416 Comparing Business Excellence Models Using Quantitative Methods: A First Step
Authors: Mohammed Alanazi, Dimitrios Tsagdis
Abstract:
Established Business Excellence Models (BEMs), like the Malcolm Baldrige National Quality Award (MBNQA) model and the European Foundation for Quality Management (EFQM) model, have been adopted by firms all over the world. They exist alongside more recent country-specific BEMs, e.g. the Australian, Canadian, Chinese, New Zealand, Singaporean, and Taiwanese quality awards, which, although not as widespread as the MBNQA and EFQM, nonetheless have strong national followings. Regardless of any differences in their following or prestige, the emergence and development of all BEMs have been shaped both by their local context (e.g. underlying socio-economic dynamics) and by global best practices. Besides the similarities that render them objects (i.e. models) of the same class (i.e. BEMs), BEMs exhibit non-trivial differences in their criteria, relations, and emphasis. Given the evolution of BEMs (e.g. the MBNQA has undergone seven revisions since its inception in 1987, and the EFQM five since 1993), it is unsurprising that comparative studies of their validity are few and far between. This poses challenges for practitioners and policy makers alike, as it is not always clear which BEM is to be preferred or better fits a particular context, especially in contexts that differ substantially from the original context of BEM development. This paper aims to fill this gap by presenting a research design and measurement model for comparing BEMs using quantitative methods (e.g. structural equations). Three BEMs are focused upon for illustration purposes: the MBNQA, the EFQM, and the King Abdul Aziz Quality Award (KAQA) model. They have been selected so as to reflect the two established and widely spread traditions as well as a more recent context-specific arrival promising a better fit.
Keywords: Baldrige, business excellence, European Foundation for Quality Management, structural equation model, total quality management
Procedia PDF Downloads 236
415 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics
Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn–Sham equations, yet requires significant computational resources, restricting the size of systems and the time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1M LiPF6 in EC/EMC (v/v 3:7) electrolyte for Li battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. These data were then used to train the NequIP model, which learns efficiently from the provided data. NequIP achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while improving on the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP's applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network
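As a sketch of the final step, a trained and deployed NequIP potential is typically driven from a LAMMPS input script through the pair_nequip plugin. Everything below (file names, element ordering, thermostat settings) is an assumption for illustration, not the authors' actual input:

```
# Hypothetical LAMMPS input fragment (requires the pair_nequip plugin)
units      metal
atom_style atomic
read_data  electrolyte.data        # assumed data file for the LiPF6/EC/EMC box

pair_style nequip
# deployed.pth is assumed to be a model exported with `nequip-deploy build`;
# the element-to-type mapping is also an assumption
pair_coeff * * deployed.pth Li P F C H O

timestep   0.001                   # 1 fs
fix        1 all nvt temp 300.0 300.0 0.1
run        100000
```

The point of the workflow is that the expensive VASP/AIMD stage is run only to produce training data; production trajectories are then generated at near-classical cost in LAMMPS.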
Procedia PDF Downloads 11
414 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome
Authors: Agada N. Ihuoma, Nagata Yasunori
Abstract:
Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increased depletion of most fossil fuels, the world is faced with the challenge of meeting ever-increasing energy demands. Incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is therefore crucial to balance energy supply and demand on loads, predict the performance and outcome of solar energy, enhance production planning and energy management, and ensure the proper sizing of parameters when generating clean energy. However, one of the major problems of forecasting lies in the algorithms used to control, model, and predict the performance of energy systems, which are complicated and involve large computing power, differential equations, and time series. Unreliable (poor quality) solar radiation data for a geographical location, as well as insufficiently long series, can also be a bottleneck. To overcome these problems, this study employs Anaconda Navigator (Jupyter Notebook) for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables the balancing of supply and demand on loads as well as enhanced production planning and energy management.
Keywords: artificial intelligence, backward elimination, linear regression, solar energy
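The keywords point to linear regression with backward elimination. The sketch below illustrates the idea under stated assumptions: it uses a pure-Python ordinary least squares fit and drops predictors by a residual-sum-of-squares criterion (rather than the p-value criterion often used in practice), and the data and feature names are invented:

```python
def solve(a, b):
    # Solve a @ x = b by Gaussian elimination with partial pivoting.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def ols(xs, y):
    # Ordinary least squares via the normal equations X'X b = X'y.
    k = len(xs[0])
    xtx = [[sum(row[i] * row[j] for row in xs) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(xs, y)) for i in range(k)]
    return solve(xtx, xty)

def sse(xs, y, beta):
    return sum((yi - sum(b * v for b, v in zip(beta, row))) ** 2
               for row, yi in zip(xs, y))

def backward_eliminate(xs, y, names, tol=1e-6):
    # Repeatedly drop the predictor whose removal increases the residual
    # sum of squares the least, while that increase stays below `tol`.
    names = list(names)
    while len(names) > 1:
        base = sse(xs, y, ols(xs, y))
        reduced = [[r[:j] + r[j + 1:] for r in xs] for j in range(len(names))]
        j = min(range(len(names)), key=lambda i: sse(reduced[i], y, ols(reduced[i], y)))
        if sse(reduced[j], y, ols(reduced[j], y)) - base > tol:
            break
        xs, _ = reduced[j], names.pop(j)
    return names

# Hypothetical predictors: solar irradiance proxy, air temperature, a noise column.
xs = [[1, 10, 3], [2, 12, 1], [3, 9, 4], [4, 15, 1], [5, 11, 5]]
y = [2 * r[0] + 0.5 * r[1] for r in xs]  # the target depends only on the first two
print(backward_eliminate(xs, y, ["irradiance", "temperature", "noise"]))
# -> ['irradiance', 'temperature']
```

The noise column is eliminated because removing it barely changes the fit, while removing either informative predictor raises the residual error sharply.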
Procedia PDF Downloads 155
413 Assessment and Evaluation of the Resilience of Urban Neighborhoods in Coping with Natural Disasters in the Metropolis of Tabriz (Case Study: Region 6 of Tabriz)
Authors: Ali Panahi, Kosar Khosravi
Abstract:
Earthquake resilience is one of the most important theoretical and practical concepts in crisis management. Over the past few decades, the rapid growth of urban areas and the development of lower-income urban areas (especially in developing countries) have made them more vulnerable to human and natural crises. Therefore, the resilience of urban communities, especially low-income and unhealthy neighborhoods, is of particular importance. The present study seeks to assess and evaluate the resilience of neighborhoods in the center of district 6 of Tabriz in terms of awareness, knowledge and personal skills, social and psychological capital, managerial-institutional capacity, and the ability to return to appropriate and sustainable conditions. The research method is descriptive-analytical. The authors used library and survey methods to collect information and a questionnaire to assess resilience. The statistical population of this study is the households living in the four neighborhoods of Shanb Ghazan, Khatib, Gharamalek, and Abuzar alley. Three hundred eighty-four households from the four neighborhoods were selected based on the Cochran formula, using simple random sampling. A one-sample t-test, simple linear regression, and structural equations were used to test the research hypotheses. Findings showed that only two indicators, awareness and social and psychological capital, had a favorable, approved status in district 6 of Tabriz. Therefore, considering the multidimensional concept of resilience, district 6 of Tabriz is in an unfavorable resilience situation. The findings based on the analysis of variance also indicated no significant difference between the neighborhoods of district 6 in terms of resilience, and most neighborhoods are in an unfavorable situation.
Keywords: resilience, statistical analysis, earthquake, district 6 of Tabriz
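The reported sample of 384 households is what Cochran's formula gives at a 95% confidence level with maximum variability; a quick check (standard formula, parameter values assumed, since the abstract does not state them):

```python
def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula for an (effectively) infinite population:
    n0 = z^2 * p * (1 - p) / e^2
    z: z-score for the confidence level (1.96 ~ 95%),
    p: assumed proportion (0.5 maximizes the required sample),
    e: margin of error."""
    return z ** 2 * p * (1 - p) / e ** 2

n0 = cochran_sample_size()
print(round(n0, 2))  # -> 384.16, consistent with the study's 384 households
```

With these conventional defaults the formula yields 384.16, which the study evidently rounded to 384.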
Procedia PDF Downloads 76
412 Total Organic Carbon, Porosity and Permeability Correlation: A Tool for Carbon Dioxide Storage Potential Evaluation in Irati Formation of the Parana Basin, Brazil
Authors: Richardson M. Abraham-A., Colombo Celso Gaeta Tassinari
Abstract:
A correlation analysis between Total Organic Carbon (TOC) and flow units has been carried out to predict and compare the carbon dioxide (CO2) storage potential of the shale and carbonate rocks in the Irati Formation of the Parana Basin. The equations for permeability (K), reservoir quality index (RQI) and flow zone indicator (FZI) are redefined and used to evaluate the flow units in both potential reservoir rocks. Shales show higher values of TOC compared to carbonates; as such, porosity (Ф) is most likely to be higher in shales than in carbonates. An increase in Ф corresponds to an increase in K (in both rock types). Nonetheless, at lower values of Ф, K is higher in carbonates than in shales. This shows that at lower values of TOC in carbonates, Ф is low, yet K is likely to be high compared to shale. In the same vein, at higher values of TOC in shales, Ф is high, yet K is expected to be low compared to carbonates. Overall, the flow unit factors (RQI and FZI) are better in the carbonates than in the shales. Moreover, within the study location, there are portions where the thicknesses of the carbonate units are greater than those of the shale units. Most parts of the carbonate strata in the study location are fractured in situ; hence, this could provide easy access for the storage of CO2. Therefore, based on these points and the disparities between the flow units in the evaluated rock types, the carbonate units are expected to show better potential for the storage of CO2. The shale units may be considered potential cap rocks or seals.
Keywords: total organic carbon, flow units, carbon dioxide storage, geologic structures
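The abstract notes that the equations for K, RQI and FZI are redefined for this study; for orientation, the widely used baseline definitions (permeability in millidarcies, porosity as a fraction) can be sketched as follows, with illustrative input values that are not the paper's data:

```python
import math

def rqi(k_md, phi):
    """Reservoir quality index (micrometres): RQI = 0.0314 * sqrt(k / phi),
    with k in millidarcies and phi as a fraction."""
    return 0.0314 * math.sqrt(k_md / phi)

def fzi(k_md, phi):
    """Flow zone indicator: RQI normalised by the pore-to-matrix volume
    ratio phi_z = phi / (1 - phi)."""
    phi_z = phi / (1.0 - phi)
    return rqi(k_md, phi) / phi_z

# Illustrative rock sample: k = 100 mD, phi = 20%
print(round(rqi(100.0, 0.20), 4))  # -> 0.7021
print(round(fzi(100.0, 0.20), 4))  # -> 2.8085
```

Samples with similar FZI values are conventionally grouped into the same hydraulic flow unit, which is the basis of the comparison between the shale and carbonate intervals above.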
Procedia PDF Downloads 162
411 Major Sucking Pests of Rose and Their Seasonal Abundance in Bangladesh
Authors: Md Ruhul Amin
Abstract:
This study was conducted in the experimental field of the Department of Entomology, Bangabandhu Sheikh Mujibur Rahman Agricultural University, Gazipur, Bangladesh from November 2017 to May 2018 with a view to understanding the seasonal abundance of the major sucking pests of rose, namely thrips, aphids and red spider mites. The findings showed that thrips started to build up their population from the middle of January with an abundance of 1.0 leaf⁻¹, increased continuously, reached their peak level (2.6 leaf⁻¹) in the middle of February, and then declined. Aphids started to build up their population from the second week of November with an abundance of 6.0 leaf⁻¹, increased continuously, reached their peak level (8.4 leaf⁻¹) in the last week of December, and then declined. Mites started to build up their population from the first week of December with an abundance of 0.8 leaf⁻¹, increased continuously, reached their peak level (8.2 leaf⁻¹) in the second week of March, and then declined. Thrips and mites prevailed until the last week of April, and aphids showed their abundance until the last week of May. The daily mean temperature, relative humidity, and rainfall had insignificant negative correlations with thrips abundance and significant negative correlations with aphid abundance. The daily mean temperature had a significant positive, relative humidity an insignificant positive, and rainfall an insignificant negative correlation with mite abundance. The multiple linear regression analysis showed that the weather parameters together explained 38.1, 41.0 and 8.9% of the abundance of thrips, aphids and mites on rose, respectively, and the equations were insignificant.
Keywords: aphid, mite, thrips, weather factors
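The correlations between weather factors and pest abundance reported above are Pearson coefficients; a minimal sketch with invented numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical weekly mean temperature (deg C) and mite counts per leaf
temperature = [20, 22, 25, 28, 30]
mites = [1, 2, 4, 6, 8]
print(round(pearson_r(temperature, mites), 3))  # -> 0.995
```

A strongly positive r like this illustrates the reported significant positive association between temperature and mite abundance; the negative correlations for thrips and aphids would show as r < 0.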
Procedia PDF Downloads 159
410 Anonymity and Irreplaceability: Gross Anatomical Practices in Japanese Medical Education
Authors: Ayami Umemura
Abstract:
Without exception, all the bodies dissected in gross anatomical practice are bodies that lived irreplaceable lives, laughing and talking with family and friends. While medical education aims to cultivate medical knowledge that is universally applicable to all human bodies, it relies on unique, irreplaceable, and singular entities. In this presentation, we explore the "irreplaceable relationship" that is cultivated between medical students and anonymous cadavers during gross anatomical practice, drawing on Emmanuel Levinas's "ethics of the face" and Martin Buber's discussion of "I-Thou." Through this, we aim to present "a different ethic" that emerges only in the context of face-to-face relationships, one that differs from the generalized, institutionalized, mass-produced ethics seen in so-called "ethics codes." Since the 1990s, there has been a movement around the world to use gross anatomical practice as an "educational tool" for medical professionalism and medical ethics, and some educational institutions have started disclosing the actual names, occupations, and places of birth of the cadavers to medical students. These efforts have also been criticized as lacking clinical composure. In any case, the issue here is that this information is all about a past that medical students never know directly. The critical fact that medical students build relationships from scratch and spend precious time together without any information about the cadavers' lives before death is overlooked. Amid gross anatomical practice, a medical student is exposed to anonymous cadavers with faces, touching and feeling them. In this presentation, we examine a collection of essays on gross anatomical practice written by medical students across the country and collected by the Japanese Association for Volunteer Body Donation since 1978.
There, we see the students calling out to the cadavers, being called out to, being encouraged, superimposing the cadavers on their own immediate family, regretting the parting, and shedding tears. Medical students can then be seen addressing the dead body in the second person singular, "you." These behaviors reveal an irreplaceable relationship between the anonymous cadavers and the medical students. The moment they become involved in an irreplaceable relationship between "I and you," an accidental and anonymous encounter becomes inevitable. When medical students realize that they are the inevitable recipients of voluntary, addressless gifts, they pledge to become "good doctors" indebted to these anonymous persons. This presentation aims to present "a different ethic" based on the uniqueness and irreplaceability that come from the faces of others embedded in each context, which differs from "routine" and "institutionalized" ethics. That ethic can be realized only "because of anonymity."
Keywords: anonymity, irreplaceability, uniqueness, singularity, Emmanuel Levinas, Martin Buber, Alain Badiou, medical education
Procedia PDF Downloads 60
409 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach
Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené
Abstract:
Managerial actions that negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited as a factor in the crises in some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market and macro-economic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used for the study. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between the dependent and independent variables. The Hausman test will be used to select between the random and fixed effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk. The study will uncover another perspective on internal controls: not only as an operational risk issue but as a credit risk issue too.
Banks will be cautious that maintaining effective internal control systems is an ethical and socially responsible act, since the collapse (crisis) of financial institutions as a result of excessive defaults is a major source of contagion. This study deviates from the usual primary-data approach to measuring internal control variables and instead models internal control variables quantitatively for the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
Keywords: agency theory, credit risk, internal controls, revised COSO framework
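For a single coefficient, the Hausman test the authors plan to use reduces to a simple scalar statistic; the sketch below uses hypothetical estimates, and assumes (as the test's theory requires) that the fixed-effects variance exceeds the random-effects variance:

```python
def hausman_scalar(b_fe, var_fe, b_re, var_re):
    """Hausman statistic for one coefficient: under the null that the
    random-effects estimator is consistent, it is chi-square with 1 d.f.
    Assumes var_fe > var_re."""
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

# Hypothetical fixed- and random-effects estimates of one internal-control coefficient
h = hausman_scalar(b_fe=0.42, var_fe=0.010, b_re=0.35, var_re=0.004)
model = "fixed effects" if h > 3.84 else "random effects"  # 5% critical value, 1 d.f.
print(round(h, 2), model)  # -> 0.82 random effects
```

In the multivariate case the same logic applies with the full coefficient vectors and covariance matrices; a statistic below the critical value favors the more efficient random-effects specification.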
Procedia PDF Downloads 315
408 Impact of Nanoparticles in Enhancement of Thermal Conductivity of Phase Change Materials in Thermal Energy Storage and Cooling of Concentrated Photovoltaics
Authors: Ismaila H. Zarma, Mahmoud Ahmed, Shinichi Ookawara, Hamdi Abo-Ali
Abstract:
Phase change materials (PCMs) are an ideal thermal storage medium. They are characterized by a high latent heat, which allows them to store large amounts of energy as the material transitions between physical states. Concentrated photovoltaic (CPV) systems are widely recognized as the most efficient form of photovoltaics (PV), and their thermal energy can be stored in PCMs. However, PCMs often have a low thermal conductivity, which leads to a slow transient response. This makes it difficult to quickly store and access the energy within PCM-based systems, so there is a need to improve transient responses and increase the thermal conductivity. The present study aims to investigate and analyze the melting and solidification of a phase change material enhanced by nanoparticles and held in a container. Heat flux from a concentrated photovoltaic system is applied in order to analyze the thermal performance and the impact of the nanoparticles. The work is realized using a two-dimensional model that takes the phase change phenomena into account based on the enthalpy method. Numerical simulations have been performed to investigate the heat and flow characteristics, using the governing equations, and to ascertain the impact of the nanoparticle loading. The Rayleigh number, sub-cooling, the unsteady evolution of the melting front, and the velocity and temperature fields were also examined. The predicted results exhibited good agreement, showing thermal enhancement due to the presence of nanoparticles, which leads to a decrease in melting time.
Keywords: thermal energy storage, phase-change material, nanoparticle, concentrated photovoltaic
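The enthalpy method referred to above can be stated compactly; the symbols below are the conventional ones for the single-domain formulation, not notation taken from the paper:

```latex
% Total enthalpy = sensible part + latent part, with liquid fraction f
H = \int_{T_{\mathrm{ref}}}^{T} c_p \, dT + f L, \qquad
f =
\begin{cases}
0 & T < T_s \\[4pt]
\dfrac{T - T_s}{T_l - T_s} & T_s \le T \le T_l \\[8pt]
1 & T > T_l
\end{cases}

% Energy equation solved over the whole domain (solid, mushy zone, liquid)
\frac{\partial (\rho H)}{\partial t} + \nabla \cdot (\rho \mathbf{u} H)
  = \nabla \cdot (k \nabla T)
```

Tracking the liquid fraction f rather than the phase interface directly is what lets a fixed-grid solver capture the moving melting front.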
Procedia PDF Downloads 202
407 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture
Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz
Abstract:
Remote sensing has potential applications in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential of open-data-source satellite images (Sentinel-2 and Landsat 9) for estimating the biophysical properties of a wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. A scatter plot with the coefficient of determination (R²) and root mean square error (RMSE), together with a correlation (r) matrix prepared using the corr and heatmap Python libraries, was used to compare the performance of the Sentinel-2 and Landsat 9 VIs against the drone and the SpectraVue 710s spectrophotometer.
The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52). The soil texture of the study farm is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plants. Both the scatter plot and the correlation matrix showed that the Sentinel-2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone compared to Landsat 9. The Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, Sentinel-2 showed better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current study.
Keywords: Landsat 9, leaf spectrometer, Sentinel-2, UAV
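The abstract does not list the ten indices used, but NDVI is almost certainly among the most common ones referred to, and is easy to sketch (band pairings per the two missions' standard designations; the reflectance values are hypothetical):

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from surface reflectances."""
    return (nir - red) / (nir + red)

# Sentinel-2 uses band 8 (NIR) and band 4 (red);
# Landsat 9 OLI-2 uses band 5 (NIR) and band 4 (red).
# Hypothetical reflectances for a healthy wheat canopy:
print(round(ndvi(nir=0.45, red=0.08), 3))  # -> 0.698
```

Comparing the same index computed from the two satellites, the drone, and the leaf spectrometer at the five sampling points is exactly the cross-sensor comparison the study performs.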
Procedia PDF Downloads 107
406 3-D Numerical Simulation of Scraped Surface Heat Exchanger with Helical Screw
Authors: Rabeb Triki, Hassene Djemel, Mounir Baccar
Abstract:
Surface scraping is a passive heat transfer enhancement technique that is directly used in scraped surface heat exchangers (SSHEs). The scraping action prevents the accumulation of the product on the inner wall, which intensifies the heat transfer and avoids the formation of dead zones. SSHEs are widely used in industry for several applications such as crystallization, sterilization, freezing, gelatinization, and many other continuous processes. They are designed to handle products that are viscous, sticky, or that contain particulate matter. This research work presents a three-dimensional numerical simulation of the coupled thermal and hydrodynamic behavior within an SSHE that includes an Archimedes screw instead of scraper blades. The finite volume code ANSYS Fluent 15.0 was used to solve the continuity, momentum and energy equations using the multiple reference frame formulation. The process fluid investigated in this study is pure glycerin. Different geometrical parameters were studied for the case of steady, non-isothermal, laminar flow. In particular, attention is focused on the effect of the conicity of the rotor and the pitch of the Archimedes screw on the temperature and velocity distributions and the heat transfer rate. The numerical investigations show that increasing the number of turns in the screw from five to seven improves the heat transfer coefficient, and increasing the conicity of the rotor from 0.1 to 0.15 increases the rate of heat transfer. Further studies should investigate the effect of different operating parameters (axial and rotational Reynolds numbers) on the hydrodynamic and thermal behavior of the SSHE.
Keywords: ANSYS Fluent, hydrodynamic behavior, scraped surface heat exchanger, thermal behavior
Procedia PDF Downloads 157
405 Biophysical Modeling of Anisotropic Brain Tumor Growth
Authors: Mutaz Dwairy
Abstract:
Solid tumors have high interstitial fluid pressure (IFP), high mechanical stress, and low oxygen levels. Solid stresses may induce apoptosis, stimulate the invasiveness and metastasis of cancer cells, and lower their proliferation rate, while oxygen concentration may affect the response of cancer cells to treatment. Although tumors grow in a nonhomogeneous environment, many existing theoretical models assume homogeneous growth and uniform tissue mechanical properties. The brain, for example, consists of three primary materials: white matter, gray matter, and cerebrospinal fluid (CSF). Tissue inhomogeneity should therefore be considered in the analysis. This study established a physical model based on convection-diffusion equations and continuum mechanics principles. The model captures the geometrical inhomogeneity of the brain by including the three different matters in the analysis: white matter, gray matter, and CSF. The model also considers fluid-solid interaction and explicitly describes the effect of mechanical factors, e.g., solid stresses and IFP; chemical factors, e.g., oxygen concentration; and biological factors, e.g., cancer cell concentration, on growing tumors. In this article, we applied the model to a brain tumor positioned within the white matter, accounting for the brain's inhomogeneity, to estimate the solid stresses, IFP, cancer cell concentration, oxygen concentration, and deformation of the tissues within the neoplasm and its surroundings. Tumor size was estimated at different time points. This model might be clinically valuable for cancer detection and treatment planning by estimating mechanical stresses, IFP, and oxygen levels in the tissue.
Keywords: biomechanical model, interstitial fluid pressure, solid stress, tumor microenvironment
Procedia PDF Downloads 45
404 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets presents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists but also for researchers looking to compare multiple datasets, or for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from the metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of Elasticsearch applied to the metadata.
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform that lets common English access knowledge that previously required considerable effort and experience. By making this public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
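The abstract does not detail its linking methods, but the core idea of scoring a plain-English query against dataset topic descriptions can be sketched minimally with bag-of-words cosine similarity. The dataset names and descriptions below are illustrative assumptions, not Earthdata records, and the real system uses a knowledge graph and richer NLP than this:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative topic descriptions standing in for dataset metadata.
topics = {
    "MOD11": "land surface temperature daily global",
    "GPM":   "precipitation rate global rainfall satellite",
}

query = Counter("how hot is the ground surface".split())
scores = {name: cosine(query, Counter(desc.split()))
          for name, desc in topics.items()}
best = max(scores, key=scores.get)
print(best)  # "MOD11" - the query shares "surface" with its description
```

A production linker would replace exact word overlap with embeddings or synonym expansion ("hot" should match "temperature"), which is exactly the gap the knowledge graph is meant to close.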
Procedia PDF Downloads 146
403 Non-Linear Load-Deflection Response of Shape Memory Alloys-Reinforced Composite Cylindrical Shells under Uniform Radial Load
Authors: Behrang Tavousi Tehrani, Mohammad-Zaman Kabir
Abstract:
Shape memory alloys (SMAs) are often implemented in smart structures as the active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded in composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce the radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells under uniform external pressure load. Based on first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The one-dimensional simplified Brinson model is used to determine the SMA recovery force because of its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain the non-linear load-deflection curves. The results are verified by comparison with those in the literature. Several parametric studies are conducted to investigate the effect of the SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable use of SMA wires results in a considerable enhancement of the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
Keywords: airy stress function, cylindrical shell, Galerkin technique, load-deflection curve, recovery stress, shape memory alloy
Procedia PDF Downloads 188
402 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region
Authors: Ahmed O. Apampa, Yinusa A. Jimoh
Abstract:
The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA), as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed to an ash with characteristics similar to the Class C fly ash pozzolan specified in ASTM C618-12. This was then blended with ordinary Portland cement (OPC) in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with a lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 to 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter tested for geotechnical index properties, including the BS Proctor compaction, California Bearing Ratio (CBR) and unconfined compression strength tests. The tests were repeated for a soil-cement mix without any CCA blending. The cost of the binder inputs and the optimal blends of CCA:OPC in the stabilized soil were then analyzed by developing algorithms that relate the experimental strength data (unconfined compression strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up minimizing the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary engine and the Generalized Reduced Gradient option of the Solver in MS Excel 2010 were applied separately to obtain the optimal CCA:OPC blend. The optimal blend attaining the required strength of 1800 kN/m² was determined as a 5.4% mix for the 1:2 CCA:OPC ratio (OPC content 3.6%), compared with 4.2% for the OPC-only option, and as a 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though it more than doubled the UCS value of the control sample with 0% binder.
Given that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates that the construction industry's net CO2 contribution to the environment could be cut by 14 to 28.5% if CCA:OPC blends were widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste should intensify the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road-works, thereby reducing overall greenhouse gas emissions in compliance with the objectives of the United Nations Framework Convention on Climate Change.
Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission
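The reported 14 to 28.5% CO2-reduction range follows directly from the OPC contents quoted in the abstract, since cement-related emissions scale with the OPC dosage. A quick check of that arithmetic:

```python
# OPC-only stabilization needs 4.2% cement; the 1:2 CCA:OPC blend needs
# 3.6% OPC and the 1:1 blend needs 3.0% OPC for the same 1800 kN/m^2
# target. With ~0.97 t CO2 emitted per tonne of cement (OEE, 2001), the
# CO2 saving is proportional to the OPC reduction.

def co2_reduction(opc_baseline: float, opc_blend: float) -> float:
    """Percentage reduction in cement-related CO2 emissions."""
    return 100.0 * (opc_baseline - opc_blend) / opc_baseline

saving_1to2 = co2_reduction(4.2, 3.6)  # 1:2 CCA:OPC blend
saving_1to1 = co2_reduction(4.2, 3.0)  # 1:1 CCA:OPC blend

print(f"1:2 blend: {saving_1to2:.1f}% CO2 reduction")  # 14.3%
print(f"1:1 blend: {saving_1to1:.1f}% CO2 reduction")  # 28.6%
```

The two endpoints reproduce the 14 to 28.5% range stated in the abstract.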
Procedia PDF Downloads 371
401 Investigation of Free Vibrations of Opened Shells from Alloy D19: Assistance of the Associated Mass System
Authors: Oleg Ye Sysoyev, Artem Yu Dobryshkin, Nyein Sitt Naing
Abstract:
Cylindrical shells are widely used in the construction of buildings and structures, as well as in aircraft structures. Thin-walled shells made of aluminum alloys are an effective substitute for reinforced concrete and steel structures in construction. Theoretical calculations must correspond to the actual behavior of aluminum alloy structures to ensure their trouble-free operation. The "Building Constructions" laboratory of our university conducted an experimental study to determine the effect of a system of attached masses on the natural oscillations of shallow cylindrical shells of aluminum alloys, the results of which were compared with theoretical calculations. The purpose of the experiment was to measure the free oscillations of an open, shallow cylindrical shell for various configurations of the attached masses. Oscillations of an open, shallow, thin-walled cylindrical shell, rectangular in plan, were measured using induction accelerometers. The theoretical calculation of the shell was carried out on the basis of the equations of motion of the theory of shallow shells, using the Bubnov-Galerkin method. A significant splitting of the flexural frequency spectrum is found, influenced not only by the systems of attached masses but also by the values of the wave formation parameters, which depend on the relative geometric dimensions of the shell. The agreement of the analytical and experimental data, shown for an open shell of alloy D19, supports the soundness of the study. A qualitatively new analytical solution of the problem of determining the oscillation frequency of a shell carrying a system of attached masses is presented.
Keywords: open hollow shell, nonlinear oscillations, associated mass, frequency
Procedia PDF Downloads 295
400 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of Oregon city with different-sized groups of buildings next to the coastline is used to calculate the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, which enables the pressure distribution on every surface of each building to be tracked temporally. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis for understanding how well reduced models, including the 2D shallow water model and reduced 1D models, predict water heights, flow velocity and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside, Oregon experiment.
Keywords: computational fluid dynamics, extreme events, loading, tsunami
Procedia PDF Downloads 114
399 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions
Authors: Shivam Patel, Abdullah Y. Usmani
Abstract:
Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate the unsteady three-dimensional flow field through the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, nature of the flow patterns, wall pressure and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy of surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium in which the extended Darcy's law, including the Forchheimer and Brinkman terms, is applicable. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that fluid impingement on the outer wall of the arterial bend leads to abnormality in the distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
Keywords: 3D FVM, cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid
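The Carreau-Yasuda model mentioned above gives blood an apparent viscosity that falls smoothly from a zero-shear plateau to a high-shear plateau. A minimal sketch of the constitutive relation, using commonly cited literature parameter values for blood rather than values reported by these authors:

```python
# Carreau-Yasuda model: mu = mu_inf + (mu0 - mu_inf)*[1 + (lam*g)^a]^((n-1)/a)
# Parameter values below are typical literature values for blood
# (an assumption, not the paper's data).

def carreau_yasuda(shear_rate, mu0=0.056, mu_inf=0.00345,
                   lam=3.313, n=0.3568, a=2.0):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (
        1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

# Blood thins with shear: viscosity stays near mu0 at rest and falls
# toward mu_inf at the high shear rates found near arterial walls.
print(carreau_yasuda(0.1))     # low shear: close to mu0
print(carreau_yasuda(1000.0))  # high shear: approaches mu_inf
```

In the aneurysm sac, where recirculating flow produces low shear rates, this model predicts markedly higher viscosity than a Newtonian assumption would, which is why the shear-thinning treatment matters there.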
Procedia PDF Downloads 230
398 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving
Authors: Navah Z. Ratzon, Talia Glick, Iris Manor
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer's functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating driving in typical teenagers have been conducted in relatively small numbers, while studies on similar programs for teenagers with ADHD have been especially scarce. The aim of the present study has been to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The present study included 37 teenagers aged 17 to 19: 23 teenagers with ADHD, divided into experimental (11) and control (12) groups, as well as 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab and underwent cognitive diagnoses and a driving simulator test. Every subject in the intervention group took part in three assessment meetings and two metacognitive treatment meetings. The control groups took part in two assessment meetings with a follow-up meeting three months later. In all of the study's groups, the treatment's effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of five subjects from the intervention group was monitored continuously from a month prior to the start of the intervention, through the month of the intervention, and for another month until the end of the intervention. In the ADHD control group, the driving of four subjects was monitored from the end of the first evaluation for a period of three months.
The study's findings were affected by the fact that the ADHD control group differed from the two other groups, exhibiting ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions and attitudes towards driving. According to the driving simulator test results and the limited sampling of actual driving, a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study point in a positive direction that speaks to the viability of using a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required that will include a larger number of subjects, add hours of actual driving monitoring, and assign subjects randomly to the various groups.
Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers
Procedia PDF Downloads 305
397 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions
Authors: Peter Carter
Abstract:
Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state: radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not indicate future commitment, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is the rapidly increasing global atmospheric methane; methane emissions and sources are therefore covered in more detail.
In their application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate change policy; in particular, to the 2015 Paris Agreement on "holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels" and to the 1992 UN Framework Convention on Climate Change, which aims at "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system."
Keywords: climate change, climate change indicators, climate change trends, climate system change interactions
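The abstract's point that atmospheric CO2 equivalent carries future commitment while global warming does not rests on the standard simplified relation between CO2 concentration and radiative forcing. A minimal sketch, using the widely used Myhre et al. logarithmic expression; the concentrations below are illustrative round numbers, not the article's data:

```python
import math

C0 = 278.0  # pre-industrial CO2 concentration, ppm (approximate)

def forcing_from_co2(c_ppm: float) -> float:
    """Radiative forcing (W/m^2) relative to pre-industrial CO2,
    via the simplified relation dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / C0)

def co2_equivalent(total_forcing: float) -> float:
    """Invert the relation: the single CO2 concentration (ppm) that
    would produce the given total greenhouse-gas forcing."""
    return C0 * math.exp(total_forcing / 5.35)

print(forcing_from_co2(420.0))                   # ~2.2 W/m^2 from CO2 alone
print(co2_equivalent(forcing_from_co2(420.0)))   # recovers 420 ppm
```

Summing the forcings of all well-mixed GHGs and inverting through `co2_equivalent` is how a single CO2-equivalent indicator is constructed from many gas concentrations.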
Procedia PDF Downloads 100
396 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity; the locations and numbers of the air supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is used to solve the differential equations governing the conservation of mass, the three momentum components and energy in the simulation of air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters in this work are the air dry bulb temperature, air velocity, relative humidity and turbulence parameters, used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Predicted Percentage Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
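Computing the full PMV requires solving Fanger's heat-balance equation for air temperature, humidity, air velocity, clothing insulation and metabolic rate; the final step from PMV to PPD, however, is a single standard formula (ISO 7730), sketched below:

```python
import math

def ppd_from_pmv(pmv: float) -> float:
    """Predicted Percentage Dissatisfied (%) from the Predicted Mean
    Vote, per the standard ISO 7730 relation."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(ppd_from_pmv(0.0))  # 5.0 - even a thermally neutral room leaves ~5% dissatisfied
print(ppd_from_pmv(1.0))  # ~26 - a "slightly warm" mean vote
```

The curve is symmetric in PMV, so a "slightly cool" vote of -1 yields the same PPD as +1; comfort criteria are typically framed as keeping PPD below some threshold (often 10%, i.e. |PMV| below about 0.5).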
Procedia PDF Downloads 408
395 Magnetized Cellulose Nanofiber Extracted from Natural Resources for the Application of Hexavalent Chromium Removal Using the Adsorption Method
Authors: Kebede Gamo Sebehanie, Olu Emmanuel Femi, Alberto Velázquez Del Rosario, Abubeker Yimam Ali, Gudeta Jafo Muleta
Abstract:
Water pollution is one of the most serious worldwide issues today. Among water pollutants, heavy metals are becoming a concern for the environment and human health due to their non-biodegradability and bioaccumulation. In this study, a magnetite-cellulose nanocomposite derived from renewable resources is employed for hexavalent chromium removal by adsorption. Magnetite nanoparticles (MNPs) were synthesized directly from iron ore using solvent extraction and a co-precipitation technique. Cellulose nanofiber (CNF) was extracted from sugarcane bagasse using alkaline treatment followed by acid hydrolysis. Before and after the adsorption process, the MNPs-CNF composites were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), vibrating sample magnetometry (VSM), and thermogravimetric analysis (TGA). The impacts of several parameters, such as pH, contact time, initial pollutant concentration, and adsorbent dose, on adsorption efficiency and capacity were examined. The kinetics and isotherms of Cr(VI) adsorption were also studied. The highest removal was obtained at pH 3, and it took 80 minutes to establish adsorption equilibrium. The Langmuir and Freundlich isotherm models were used, and the experimental data fit well with the Langmuir model, which gives a maximum adsorption capacity of 8.27 mg/g. The kinetic study of the adsorption process using pseudo-first-order and pseudo-second-order equations revealed that the pseudo-second-order equation was better suited to representing the adsorption kinetic data. Based on the findings, pure MNPs and MNPs-CNF nanocomposites could be used as effective adsorbents for the removal of Cr(VI) from wastewater.
Keywords: magnetite-cellulose nanocomposite, hexavalent chromium, adsorption, sugarcane bagasse
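The two isotherm models compared in the abstract take simple closed forms. In the sketch below, qmax = 8.27 mg/g is the reported Langmuir capacity; the Langmuir constant and the Freundlich parameters are illustrative assumptions, since the abstract does not report them:

```python
def langmuir(ce, qmax=8.27, kl=0.5):
    """Langmuir isotherm: adsorbed amount qe (mg/g) as a function of
    equilibrium concentration Ce (mg/L). Saturates at qmax."""
    return qmax * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf=2.0, n=2.5):
    """Empirical Freundlich isotherm: qe = Kf * Ce^(1/n). No plateau."""
    return kf * ce ** (1.0 / n)

# The Langmuir form saturates at qmax (monolayer coverage), which is why
# data plateauing near 8.27 mg/g favors it over the unbounded Freundlich
# form at high concentrations.
for ce in (1.0, 10.0, 100.0):
    print(ce, round(langmuir(ce), 2), round(freundlich(ce), 2))
```

Fitting in practice is usually done either by nonlinear least squares on these forms directly, or via the classical linearizations (Ce/qe vs Ce for Langmuir, log qe vs log Ce for Freundlich).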
Procedia PDF Downloads 127
394 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate a supersonic flow to a desired Mach number while having a small weight and, at the same time, giving a high thrust. The nozzle concerned gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line. The subsonic portion is used to produce a sonic flow at the throat. In this case, the nozzle gives a uniform and parallel flow at the exit section; it is called a minimum length nozzle. The study is carried out at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve the performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is made by the method of characteristics. The finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen for the characteristics. A numerical simulation of the nozzle in the computational fluid dynamics code FASTRAN was performed to determine and confirm the necessary design parameters.
Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error
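The method of characteristics for a minimum length nozzle is built on the Prandtl-Meyer function. The paper's contribution lies in the calorically imperfect, high-temperature correction; the sketch below shows only the calorically perfect baseline (constant γ = 1.4 for air) that such a corrected model extends:

```python
import math

def prandtl_meyer(mach: float, gamma: float = 1.4) -> float:
    """Prandtl-Meyer angle nu(M) in degrees, for M >= 1 and a
    calorically perfect gas."""
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = (math.sqrt(g) * math.atan(math.sqrt((mach**2 - 1.0) / g))
          - math.atan(math.sqrt(mach**2 - 1.0)))
    return math.degrees(nu)

# For a perfect gas, the initial wall expansion angle at the throat of a
# minimum-length nozzle is half the Prandtl-Meyer angle of the design
# exit Mach number.
print(prandtl_meyer(2.0))        # ~26.38 degrees
print(prandtl_meyer(2.0) / 2.0)  # initial expansion angle at the throat
```

At high stagnation temperature, γ and the specific heats vary with temperature, so ν(M) must instead be evaluated by numerical integration, which is where the paper's predictor-corrector finite difference scheme comes in.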
Procedia PDF Downloads 149
393 Numerical Investigation of Hydrodynamic and Parietal Heat Transfer to Bingham Fluid Agitated in a Vessel by Helical Ribbon Impeller
Authors: Mounir Baccar, Amel Gammoudi, Abdelhak Ayadi
Abstract:
The efficient mixing of highly viscous fluids is required in many industries, such as food, polymer or paint production. Homogenization is a challenging operation for this type of fluid since it is carried out at low Reynolds number to reduce the required impeller power. In particular, close-clearance impellers, mainly helical ribbons, are chosen for highly viscous fluids agitated in the laminar regime, which are commonly heated through the vessel wall. Indeed, these impellers generate high shear strains close to the vessel wall, which disturbs the thermal boundary layer, and they ensure the homogenization of the bulk volume through axial and radial vortices. The hydrodynamic and thermal behavior of Newtonian fluids in vessels agitated by helical ribbon impellers has been studied by many researchers. However, researchers have rarely investigated numerically the agitation of yield stress fluids by means of helical ribbon impellers. This paper studies the effect of double helical ribbon (DHR) stirrers on both the hydrodynamic and the thermal behavior of yield stress fluids treated in a cylindrical vessel by means of a numerical simulation approach. For this purpose, the continuity, momentum, and thermal equations were solved by means of a 3D finite volume technique. The effect of the Oldroyd (Od) and Reynolds (Re) numbers on the power (Po) and Nusselt (Nu) numbers has been studied for the mentioned stirrer type. The velocity and thermal fields, the dissipation function and the apparent viscosity are also presented in different (r-z) and (r-θ) planes.
Keywords: Bingham fluid, hydrodynamic and thermal behavior, helical ribbon, mixing, numerical modelling
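The "apparent viscosity" field mentioned above comes from the Bingham constitutive law, whose viscosity diverges as the shear rate vanishes. A minimal sketch using the Papanastasiou regularization commonly adopted in CFD to keep the viscosity finite (the abstract does not state which regularization the authors used, and the parameter values here are illustrative):

```python
import math

def apparent_viscosity(shear_rate, mu_p=1.0, tau_y=30.0, m=1000.0):
    """Regularized Bingham apparent viscosity (Pa*s).

    mu_p   plastic viscosity, Pa*s
    tau_y  yield stress, Pa
    m      regularization time, s (larger = closer to ideal Bingham)
    """
    if shear_rate == 0.0:
        return mu_p + tau_y * m  # limit of (1 - exp(-m*g))/g as g -> 0
    return mu_p + tau_y * (1.0 - math.exp(-m * shear_rate)) / shear_rate

# Near the rotating ribbon the shear rate is high, so the fluid behaves
# almost like a Newtonian fluid of viscosity mu_p; in poorly sheared
# zones the apparent viscosity becomes huge, which is exactly the
# dead-zone behavior the close-clearance ribbon is there to break up.
print(apparent_viscosity(100.0))  # ~1.3 Pa*s in a ribbon-swept region
print(apparent_viscosity(0.01))   # ~3000 Pa*s in a nearly unyielded zone
```

The Oldroyd number in the abstract is the dimensionless ratio of the yield stress term to the viscous term, so large Od means wide nearly-unyielded regions of this high apparent viscosity.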
Procedia PDF Downloads 303
392 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart rate variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR-interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method), which has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters.
After filtering, the following HRV parameters were computed from the PPG using each of the five methods and also from the ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD) and frequency domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power, "TP"). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (below 70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
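The three time-domain parameters compared across methods are simple statistics of the interbeat-interval series (RR intervals from ECG, or pulse-cycle intervals from PPG). A minimal sketch on a synthetic interval series in milliseconds:

```python
import math

def sdnn(rr):
    """Standard deviation of all intervals (ms)."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive interval differences (ms)."""
    diffs = [rr[i + 1] - rr[i] for i in range(len(rr) - 1)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def pnn50(rr):
    """Percentage of successive differences exceeding 50 ms."""
    diffs = [abs(rr[i + 1] - rr[i]) for i in range(len(rr) - 1)]
    return 100.0 * sum(1 for d in diffs if d > 50.0) / len(diffs)

rr = [812, 790, 845, 801, 870, 795, 830]  # synthetic intervals, ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1), round(pnn50(rr), 1))
```

RMSSD and pNN50 depend only on beat-to-beat differences, which is why they are the parameters most inflated when a PPG method adds jitter to the detected pulse times, consistent with the overestimation of short-term variability noted above.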
Procedia PDF Downloads 322
391 Biosensor: An Approach towards Sustainable Environment
Authors: Purnima Dhall, Rita Kumar
Abstract:
Introduction: River Yamuna flows through the national capital territory (NCT) of Delhi and is the primary source of drinking water for the city. Delhi discharges about 3,684 MLD of sewage into the Yamuna through its 18 drains. Water quality monitoring is an important aspect of water management with respect to pollution control, and public concern and legislation now demand better environmental control. The conventional method for estimating BOD5 has various drawbacks: it is expensive, time-consuming, and requires highly trained personnel. Stringent forthcoming regulations on wastewater have created an urgent need for analytical systems that contribute to greater process efficiency. Biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. With the developed method, the BOD of a sample can be determined within 2 hours, as compared to 3-5 days with the standard BOD3-5day assay. Moreover, the test is based on a specified consortium instead of undefined seeding material, which minimizes variability among the results. The device is coupled to software that automatically calculates the required dilution, so prior dilution of the sample is not needed before BOD estimation. The developed BOD biosensor uses immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters of low, moderate, or high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: Extensive testing of the developed biosensor on drains demonstrates that the BOD values obtained by the device correlate with conventional BOD values (observed R² = 0.995). The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%.
Advantages of the developed BOD biosensor: • Determines water pollution quickly, within 2 hours; • Works on all types of wastewater; • Has a prolonged shelf life of more than 400 days; • Enhanced repeatability and reproducibility; • Eliminates the need for COD estimation. Distinctiveness of the technology: • Bio-component: can determine the BOD load of all types of wastewater; • Immobilization: increases shelf life beyond 400 days, with extended stability and viability; • Software: reduces manual errors and estimation time. Conclusion: The BOD biosensor can be used to measure the BOD value of real wastewater samples and showed good reproducibility in the results. This technology is useful for deciding treatment strategies well ahead, facilitating the discharge of properly treated water into common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.
Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna
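The reported R² of 0.995 and the ±10% deviation band are standard agreement measures between sensor readings and the conventional assay. A minimal Python sketch of how such figures are computed (the data values are hypothetical illustrations, not the study's measurements):

```python
def r_squared(observed, predicted):
    """Coefficient of determination between conventional and sensor BOD values."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

def percent_deviation(observed, predicted):
    """Signed percentage deviation of each sensor reading from the reference."""
    return [100.0 * (p - y) / y for y, p in zip(observed, predicted)]

# Hypothetical conventional BOD5 values vs. biosensor readings (mg/L)
conventional = [120.0, 85.0, 240.0, 60.0]
biosensor = [118.0, 88.0, 235.0, 62.0]
r2 = r_squared(conventional, biosensor)
```

A reading passes the reproducibility criterion quoted above when its percentage deviation lies within ±10%.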
Procedia PDF Downloads 278
390 On Crack Tip Stress Field in Pseudo-Elastic Shape Memory Alloys
Authors: Gulcan Ozerim, Gunay Anlas
Abstract:
In shape memory alloys, upon loading, stress increases around the crack tip and a martensitic phase transformation occurs in the early stages. In many studies, the stress distribution in the vicinity of the crack tip is represented using linear elastic fracture mechanics (LEFM), although the pseudo-elastic behavior results in a nonlinear stress-strain relation. In this study, the HRR singularity (Hutchinson, Rice and Rosengren), which uses Rice's path-independent J-integral, is used to formulate the stress distribution around the crack tip. In the HRR approach, the Ramberg-Osgood model for the stress-strain relation of power-law hardening materials is used to represent the elastic-plastic behavior. Although it is recoverable, the inelastic portion of the deformation in martensitic transformation (up to the end of transformation) resembles that of plastic deformation. To determine the constants of the Ramberg-Osgood equation, the material's response is simulated in ABAQUS using a UMAT based on the ZM (Zaki-Moumni) thermo-mechanically coupled model, and the stress-strain curve of the material is plotted. An edge-cracked shape memory alloy (Nitinol) plate is loaded quasi-statically under mode I and modeled using ABAQUS; the opening stress values ahead of the crack tip are calculated. The stresses are also evaluated using the asymptotic equations of both LEFM and HRR. The results show that in the transformation zone around the crack tip, the stress values are much better represented when the HRR singularity is used, although the J-integral does not show path-independent behavior. For the nodes very close to the crack tip, the HRR singularity is not valid due to the non-proportional loading effect and high stress values that go beyond the transformation finish stress.
Keywords: crack, HRR singularity, shape memory alloys, stress distribution
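The Ramberg-Osgood relation and the HRR asymptotic stress field referred to above have standard textbook forms. A minimal Python sketch of both (all numerical parameter values are hypothetical, not the Nitinol constants fitted in the study; I_n denotes the HRR integration constant, which depends on the hardening exponent n):

```python
def ramberg_osgood_strain(sigma, E, sigma0, alpha, n):
    """Ramberg-Osgood: eps = sigma/E + alpha*(sigma0/E)*(sigma/sigma0)**n."""
    eps0 = sigma0 / E                          # reference (yield) strain
    x = sigma / sigma0
    return eps0 * (x + alpha * x ** n)         # elastic + power-law inelastic part

def hrr_stress(J, r, alpha, sigma0, eps0, I_n, n, s_tilde=1.0):
    """HRR asymptotic stress at distance r ahead of the crack tip:
    sigma = sigma0 * (J / (alpha*sigma0*eps0*I_n*r))**(1/(n+1)) * s_tilde(theta, n).
    s_tilde is the dimensionless angular function (taken as 1 at theta = 0 here)."""
    return sigma0 * (J / (alpha * sigma0 * eps0 * I_n * r)) ** (1.0 / (n + 1)) * s_tilde

# Hypothetical values for illustration: J in N/mm, r in mm, stresses in MPa
eps0 = 400.0 / 70000.0
s_near = hrr_stress(J=10.0, r=0.001, alpha=1.0, sigma0=400.0, eps0=eps0, I_n=5.0, n=5.0)
```

The r**(-1/(n+1)) dependence is what distinguishes the HRR field from the r**(-1/2) LEFM singularity, and is why the two approaches diverge in the transformation zone discussed above.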
Procedia PDF Downloads 323