Search results for: linear and body measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9397

757 Nutrition Transition in Bangladesh: Multisectoral Responsiveness of Health Systems and Innovative Measures to Mobilize Resources Are Required for Preventing This Epidemic in the Making

Authors: Shusmita Khan, Shams El Arifeen, Kanta Jamil

Abstract:

Background: Nutrition transition in Bangladesh has progressed across various relevant socio-demographic contextual issues. For a developing country like Bangladesh, it is believed that overnutrition is less prevalent than undernutrition. However, recent evidence suggests that a rapid shift is taking place in which overweight is overtaking underweight. With this rapid increase, it will be challenging for Bangladesh to achieve the global agenda on halting overweight and obesity. Methods: A secondary analysis was performed on six successive national demographic and health surveys to establish the trends in undernutrition and overnutrition among women of reproductive age. In addition, relevant national policy papers were reviewed to determine the country's readiness for a whole-of-systems approach to tackle this epidemic. Results: Over the last decade, the proportion of women with low body mass index (BMI<18.5), an indicator of undernutrition, has decreased markedly from 34% to 19%. However, the proportion of overweight women (BMI ≥25) increased alarmingly from 9% to 24% over the same period. If the WHO cutoff for public health action (BMI ≥23) is used, the proportion of overweight women has increased from 17% in 2004 to 39% in 2014. The increasing rate of obesity among women is a major challenge to obstetric practice, for both women and fetuses. In the long term, overweight women are also at risk of future obesity, diabetes, hyperlipidemia, hypertension, and heart disease. These diseases have a serious impact on health care systems. Costs associated with overweight and obesity involve both direct and indirect costs. Direct costs include preventive, diagnostic, and treatment services related to obesity. Indirect costs relate to morbidity and mortality, including lost productivity. The Bangladesh Health Facility Survey shows that the country is not prepared to provide nutrition-related health services with regard to prevention, screening, management, and treatment. Therefore, if this nutrition transition is not addressed properly, Bangladesh will not be able to achieve the target of the WHO NCD global monitoring framework. Conclusion: Addressing this nutrition transition requires confronting 'malnutrition in all its forms' and tackling it with integrated approaches. Whole-of-systems action is required at all levels, from improving multi-sectoral coordination to scaling up nutrition-specific and nutrition-sensitive interventions mainstreamed within the health system.

Keywords: nutrition transition, Bangladesh, health system, undernutrition, overnutrition, obesity

Procedia PDF Downloads 266
756 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance

Authors: V. Kumar, K. Jha

Abstract:

The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance. It is evident that modifications of the vortex finder improve the performance of the cyclone separator significantly. The study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform-diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with a body diameter of 0.29 m and a vortex finder diameter of 0.1305 m are calculated. The commercial software ANSYS Fluent v14.0 is used to simulate the flow field in a uniform-diameter cyclone and in six cyclones modified with CD vortex finders. The Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, while the discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with that of the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter, and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are also analysed. Results indicate that a convergent-divergent vortex finder is capable of decreasing the pressure drop below that achieved with a uniform-diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter, and the length of the diverging part of the vortex finder have a significant impact on cyclone separator performance. An increase in the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. Increasing the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform-diameter vortex finders to achieve better cyclone separator performance.
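The dimensionless pressure drop quoted above is the Euler number. A common definition, given here only as a reference since the abstract does not state which reference velocity the authors use, is

```latex
\mathrm{Eu} \;=\; \frac{\Delta p}{\tfrac{1}{2}\,\rho\, v_{\mathrm{in}}^{2}}
```

where Δp is the static pressure drop across the cyclone, ρ is the gas density, and v_in is the inlet velocity.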

Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model

Procedia PDF Downloads 155
755 The Association of Work Stress with Job Satisfaction and Occupational Burnout in Nurse Anesthetists

Authors: I. Ling Tsai, Shu Fen Wu, Chen-Fuh Lam, Chia Yu Chen, Shu Jiuan Chen, Yen Lin Liu

Abstract:

Purpose: Following the introduction of the National Health Insurance (NHI) system in Taiwan in 1995, the demand for anesthesia services has continued to increase in operating rooms and other medical units. It is well recognized that increased work stress not only affects the clinical performance of medical staff but that long-term workload may also result in occupational burnout. Our study aimed to determine the influence of working environment, work stress, and job satisfaction on occupational burnout in nurse anesthetists. The ultimate goal of this research project is to develop a strategy for establishing a friendly, less stressful workplace for nurse anesthetists to enhance their job satisfaction, thereby reducing occupational burnout and extending the career life of nurse anesthetists. Methods: This was a cross-sectional, descriptive study performed in a metropolitan teaching hospital in southern Taiwan between May 2017 and July 2017. A structured self-administered questionnaire, modified from the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Occupational Stress Indicator 2 (OSI-2), and the Maslach Burnout Inventory (MBI) manual, was collected from the nurse anesthetists. The relationships between numeric datasets were analyzed by the Pearson correlation test (SPSS 20.0). Results: A total of 66 completed questionnaires were collected from 75 nurses (response rate 88%). The average scores for working environment, job satisfaction, and work stress were 69.6%, 61.5%, and 63.9%, respectively. The three perspectives used to assess occupational burnout, namely emotional exhaustion, depersonalization, and sense of personal accomplishment, scored 26.3, 13.0, and 24.5, suggesting the presence of moderate to high degrees of burnout in our nurse anesthetists. The presence of occupational burnout was closely correlated with an unsatisfactory working environment (r=-0.385, P=0.001) and reduced job satisfaction (r=-0.430, P<0.001). Junior nurse anesthetists (<1 year of clinical experience) reported higher satisfaction with the working environment than seniors (5 to 10 years of clinical experience) (P=0.02). Although the average scores for work stress, job satisfaction, and occupational burnout were lower in junior nurses, the differences were not statistically significant. In the linear regression model, the working environment was the independent factor predicting occupational burnout in nurse anesthetists, explaining up to 19.8% of the variance. Conclusions: High occupational burnout is more likely to develop in senior nurse anesthetists who experience a dissatisfying working environment, work stress, and lower job satisfaction. In addition to the regulation of clinical duties, the increased workload involved in supervising junior nurse anesthetists may result in emotional stress and burnout in senior nurse anesthetists. Therefore, appropriate adjustment of the clinical and teaching load of senior nurse anesthetists could help reduce occupational burnout and enhance the retention rate.
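As an illustration of the statistical workflow described above (the authors used SPSS 20.0), the sketch below shows how Pearson correlations and a linear regression explaining roughly 19.8% of burnout variance could be reproduced in Python. The file name and column names are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch (not the authors' SPSS workflow): Pearson correlations
# between burnout and its hypothesised predictors, plus a simple OLS model.
# "burnout", "environment", "satisfaction", "stress" are assumed column names.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("nurse_anesthetist_survey.csv")  # hypothetical data file

for predictor in ["environment", "satisfaction", "stress"]:
    r, p = stats.pearsonr(df[predictor], df["burnout"])
    print(f"{predictor}: r = {r:.3f}, p = {p:.3f}")

# OLS regression of burnout on working environment; the R-squared plays the
# role of the ~19.8% explained variance reported in the abstract.
X = sm.add_constant(df[["environment"]])
model = sm.OLS(df["burnout"], X).fit()
print(model.summary())
```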

Keywords: nurse anesthetists, working environment, work stress, job satisfaction, occupational burnout

Procedia PDF Downloads 263
754 The Gaze; Objectification of the Surrogate Mother in Cross-Border Surrogacy: An Empirical Study Applied to Surrogacy Facilitators

Authors: Yingyi Luo

Abstract:

Cross-border surrogacy is seen by many as a market in which women are bought and sold as commodities, at risk of trafficking. A surrogate can be framed either as a fully acknowledged subject, with whom intended parents engage in cross-border surrogacy, or as a tool utilized by intended parents and surrogacy facilitators in the furtherance of their own objectives. In order to identify which frame prevails, this paper applies subjectivity theory to an empirical study of cross-border surrogacy facilitated by facilitators in Australia, analysing interviews with surrogacy agents, counsellors, and lawyers, and observations at trade shows. The aim of the paper is to advance understanding of the dynamics of the relationship between intended parents, surrogates, and surrogacy facilitators by collecting new data and applying a unique framework. As dominant players, surrogacy facilitators have a significant impact on determining the nature of cross-border surrogacy. However, little is known about the manner in which facilitators influence the inter-subjectivity between surrogate mothers and intended parents. Thus, this paper seeks to identify how facilitators depict surrogate mothers and the degree to which their perspectives bear upon both the subjectivity of the surrogate mother and the relationship of intended parents with surrogate mothers. For the purpose of introducing and developing this framework in the context of cross-border surrogacy, this paper borrows from the work of theorists not often mentioned in bioethics, including Jacques Lacan, Marco Cavallaro, Michel Foucault, and others. It also applies the concept of 'the gaze', along with the dynamic of 'self' and 'other', to the cross-border surrogacy arrangement. Applying the concept of the gaze can provide a new way to interpret the power dynamic that plays out among surrogacy facilitators, intended parents, and surrogates within the commercial surrogacy arrangement, and how subjectivity is produced through this power. Viewing the relationships between the players in cross-border surrogacy through the lens of gaze theory, this paper finds that, due to the structural power imbalance, affluent intended parents and surrogacy facilitators are possessors of the gaze, while surrogate mothers are under its thrall. Specifically, facilitators frame surrogate mothers' reproductive abilities as commodities that intended parents can purchase to fulfil their urgent need to have children and experience full subjectivity, and they take a cut of the money paid by intended parents. Therefore, commodification of the body results in degrading the surrogate mother (the object), reifying her as no more than a walking womb (the other), a process which is highly detrimental to the self of surrogate mothers. This relationship, formalized through contractual means, allows intended parents and facilitators to take advantage of surrogate mothers in the furtherance of their own objectives. This argument is enriched by new data from interviews and observations that provide nuance to this understanding of inter-subjectivity.

Keywords: cross-border surrogacy, facilitators, self, surrogate mothers

Procedia PDF Downloads 117
753 Taraxacum Officinale (Dandelion) and Its Phytochemical Approach to Malignant Diseases

Authors: Angel Champion

Abstract:

Chemotherapy and radiation use an acidified approach to induce apoptosis, which kills only mature cancer cells while resulting in gene and cell damage with significant levels of toxicity in tumor-affected tissues and organs. The acid approach, in which the cells exterminated are not differentiated, induces the disappearance of white blood cells from the blood. This increases susceptibility to infection in severe forms of cancer spread. Moreover, chemotherapy and radiation cannot kill the cancer stem cells that metastasize, which are the leading cause of 98% of cancer fatalities. With over 12 million new symptomatic cancer cases each year, including common malignancies such as Hepatocellular Carcinoma (HCC), this study aims to assess the bioactive constituents and phytochemical composition of Taraxacum officinale (Dandelion). This analysis enables pharmaceutical quality and potency to be applied to studies on cancer cell proliferation and apoptosis. A phytochemical screening is carried out to identify the antioxidant components of Dandelion root, stem, and flower extract. The constituents tested for are phlorotannins, carbohydrates, glycosides, saponins, flavonoids, alkaloids, sterols, triterpenes, and anthraquinone glycosides. To conserve the existing phenolic compounds, a portion of the constituent tests will be examined with an acid, alcohol, or aqueous solvent. As a result, characterizing the qualitative and quantitative variations within the Dandelion extract that determine uniform effective potency is vital to conformity in producing medicinal products. These medicines will be constructed with a consistent, uniform composition that physicians can use to safely control and effectively eradicate malignant diseases. Taraxacum officinale's phytochemical composition confers a high-grade potency due to its bioactive contents, which can essentially drive out malignant disease within the human body. Its high potency is powerful enough to eliminate both mature cancer cells and cancer stem cells without the cell and gene damage induced by chemotherapy and radiation. Correspondingly, the high margins of cancer mortality on a global scale are mitigated. This remarkable contribution to modern therapeutics will essentially optimize the margins of natural products and their derivatives, which account for 50% of pharmaceuticals in modern therapeutics, while preventing the adverse effects of radiation and chemotherapy drugs.

Keywords: antioxidant, apoptosis, metastasize, phytochemical, proliferation, potency

Procedia PDF Downloads 55
752 Spatial Direct Numerical Simulation of Instability Waves in Hypersonic Boundary Layers

Authors: Jayahar Sivasubramanian

Abstract:

Understanding the laminar-turbulent transition process in hypersonic boundary layers is crucial for designing viable high-speed flight vehicles. The study of transition becomes particularly important in the high-speed regime due to the effect of transition on aerodynamic performance and heat transfer. However, even after many years of research, the transition process in hypersonic boundary layers is still not understood. This lack of understanding of the physics of the transition process is a major impediment to the development of reliable transition prediction methods. Towards this end, spatial Direct Numerical Simulations are conducted to investigate the instability waves generated by a localized disturbance in a hypersonic flat-plate boundary layer. In order to model a natural transition scenario, the boundary layer was forced by a short-duration (localized) pulse through a hole on the surface of the flat plate. The pulse disturbance developed into a three-dimensional instability wave packet which consisted of a wide range of disturbance frequencies and wave numbers. First, the linear development of the wave packet was studied by forcing the flow with a low amplitude (0.001% of the free-stream velocity). The dominant waves within the resulting wave packet were identified as two-dimensional second-mode disturbance waves. Hence the wall-pressure disturbance spectrum exhibited a maximum at the spanwise mode number k = 0. The spectrum broadened in the downstream direction, and the lower-frequency first-mode oblique waves were also identified in the spectrum. However, the peak amplitude remained at k = 0 and shifted to lower frequencies in the downstream direction. In order to investigate the nonlinear transition regime, the flow was forced with a higher-amplitude disturbance (5% of the free-stream velocity). The developing wave packet grows linearly at first before reaching the nonlinear regime. The wall-pressure disturbance spectrum confirmed that the wave packet developed linearly at first. The response of the flow to the high-amplitude pulse disturbance indicated the presence of a fundamental resonance mechanism. Lower-amplitude secondary peaks were also identified in the disturbance wave spectrum at approximately half the frequency of the high-amplitude frequency band, which would be an indication of a sub-harmonic resonance mechanism. The disturbance spectrum indicates, however, that the fundamental resonance is much stronger than the sub-harmonic resonance.

Keywords: boundary layer, DNS, hypersonic flow, instability waves, wave packet

Procedia PDF Downloads 167
751 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several complementary perspectives. The mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
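To make the data-driven benchmark concrete, here is a minimal sketch of 5-fold cross-validation of a Random Forest against a linear baseline, scored with RMSEP and a relative MAEP. The feature names, CSV file, and the exact definition of MAEP (expressed here as a percentage of mean yield) are assumptions for illustration, not details taken from the paper.

```python
# Sketch of the data-driven comparison: 5-fold cross-validation of a Random
# Forest versus a Ridge baseline, scored with RMSEP and a relative MAEP.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

data = pd.read_csv("county_yield_climate.csv")            # hypothetical file
X = data.drop(columns=["yield"]).values                    # climatic predictors
y = data["yield"].values                                   # corn yield

cv = KFold(n_splits=5, shuffle=True, random_state=0)
models = [("Ridge", Ridge(alpha=1.0)),
          ("RandomForest", RandomForestRegressor(n_estimators=500, random_state=0))]
for name, model in models:
    pred = cross_val_predict(model, X, y, cv=cv)
    rmsep = np.sqrt(np.mean((y - pred) ** 2))
    maep = np.mean(np.abs(y - pred)) / np.mean(y) * 100    # % of mean yield (assumed definition)
    print(f"{name}: RMSEP = {rmsep:.2f}, MAEP = {maep:.2f}%")
```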

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 211
750 Geostatistical Analysis of Contamination of Soils in an Urban Area in Ghana

Authors: S. K. Appiah, E. N. Aidoo, D. Asamoah Owusu, M. W. Nuonabuor

Abstract:

Urbanization remains one of the predominant factors linked to the degradation of the urban environment and the associated contamination of soils by heavy metals through natural and anthropogenic activities. These activities are important sources of toxic heavy metals such as arsenic (As), cadmium (Cd), chromium (Cr), copper (Cu), iron (Fe), manganese (Mn), lead (Pb), nickel (Ni), and zinc (Zn). Often, these heavy metals reach elevated levels in some areas due to atmospheric deposition caused by proximity to industrial plants or the indiscriminate burning of substances. Information on potentially hazardous levels of these heavy metals in soils is needed to establish their health and urban agriculture implications. However, the characterization of spatial variations in soil contamination by heavy metals in Ghana is limited. Kumasi is a metropolitan city in Ghana, West Africa, and is challenged by a recent spate of deteriorating soil quality due to rapid economic development and other human activities such as 'galamsey', illegal mining operations within the metropolis. The paper seeks to use both univariate and multivariate geostatistical techniques to assess the spatial distribution of heavy metals in soils and the potential risk associated with ingestion of sources of soil contamination in the metropolis. Geostatistical tools can detect changes in correlation structure, and good knowledge of the study area can help to explain the different scales of variation detected. To achieve this task, point-referenced data on heavy metals, measured from topsoil samples in a previous study, were collected at various locations. Linear models of regionalisation and coregionalisation were fitted to all experimental semivariograms to describe the spatial dependence between the topsoil heavy metals at different spatial scales, which led to ordinary kriging and cokriging at unsampled locations and the production of risk maps of soil contamination by these heavy metals. Results obtained from both the univariate and multivariate semivariogram models showed strong spatial dependence, with autocorrelation ranges of 100 to 300 meters. The risk maps produced show strong spatial heterogeneity for almost all the soil heavy metals, with extreme risk of contamination found close to areas with commercial and industrial activities. Hence, ongoing pollution interventions should be geared towards these high-risk areas for efficient management of soil contamination to avert further pollution in the metropolis.
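As an illustration of the univariate step described above, the sketch below fits a spherical variogram to topsoil lead concentrations and performs ordinary kriging onto a regular grid. The coordinates, column names, grid spacing, and the use of PyKrige are assumptions made for illustration; the study's own fitting of linear models of (co)regionalisation and cokriging is more elaborate.

```python
# Minimal sketch of univariate geostatistics: spherical variogram + ordinary
# kriging of topsoil Pb onto a 50 m grid (all names/values are assumptions).
import numpy as np
import pandas as pd
from pykrige.ok import OrdinaryKriging

samples = pd.read_csv("topsoil_metals.csv")          # hypothetical point data
x = samples["easting"].values
y = samples["northing"].values
pb = samples["Pb"].values

ok = OrdinaryKriging(
    x, y, pb,
    variogram_model="spherical",                     # range expected around 100-300 m
    enable_plotting=False,
)

gridx = np.arange(x.min(), x.max(), 50.0)            # 50 m prediction grid
gridy = np.arange(y.min(), y.max(), 50.0)
z_pred, z_var = ok.execute("grid", gridx, gridy)     # kriged surface and kriging variance
```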

Keywords: coregionalization, heavy metals, multivariate geostatistical analysis, soil contamination, spatial distribution

Procedia PDF Downloads 279
749 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems, however, need to be tackled in order to design and develop controllers which will perform efficiently in such complex systems. A control algorithm that can accommodate uncertainty and imprecision, owing to its inherent robustness and its ability to cope with parameter uncertainties and imprecisions, is the sliding mode algorithm. A sliding mode control algorithm is adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control scheme which can effectively control the responses of the bridge under real earthquake ground motions. A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, the performance of the controllers has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations can be accommodated. Out of these fourteen earthquakes, seven are near-field and seven are far-field. Also, these earthquakes are divided into different frequency contents, viz., low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing a stable and robust performance for all the earthquakes.
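The nested control idea described above can be sketched in a few lines. The clipped-optimal switching law below follows the form commonly used with MR dampers (maximum voltage when the measured damper force is smaller in magnitude than, and of the same sign as, the desired force, zero otherwise); the outer sliding mode loop is shown only schematically, and V_MAX and the gain are assumed values, not the study's parameters.

```python
# Sketch of the nested semi-active scheme: an outer (sliding mode) controller
# supplies a desired force; the clipped-optimal law switches the MR damper
# command voltage between 0 and V_MAX. All numbers are illustrative.
V_MAX = 10.0   # maximum command voltage (assumed)

def clipped_optimal_voltage(f_desired: float, f_measured: float) -> float:
    """Clipped-optimal law: apply V_MAX only when the measured damper force is
    smaller in magnitude than, and of the same sign as, the desired force."""
    return V_MAX if (f_desired - f_measured) * f_measured > 0.0 else 0.0

def sliding_mode_force(sigma: float, eta: float = 1.0e4) -> float:
    """Schematic outer loop: a signum-type reaching law that drives the sliding
    variable sigma toward zero (boundary-layer smoothing omitted)."""
    if sigma > 0.0:
        return -eta
    if sigma < 0.0:
        return eta
    return 0.0
```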

Keywords: bridge, semi active control, sliding mode control, MR damper

Procedia PDF Downloads 114
748 Effect on the Integrity of the DN300 Pipe and Valves in the Cooling Water System Imposed by the Pipes and Ventilation Pipes above in an Earthquake Situation

Authors: Liang Zhang, Gang Xu, Yue Wang, Chen Li, Shao Chong Zhou

Abstract:

Presently, more and more nuclear power plants are facing the issue of life extension. When a nuclear power plant applies for an extension of life, its condition needs to meet the current design standards, which is not the case for all older reactors, particularly with respect to seismic design. Seismic-grade equipment in nuclear power plants is now generally placed separately from non-seismic-grade equipment, but this was not strictly required in the past. Therefore, it is very important to study whether non-seismic-grade equipment will affect seismic-grade equipment when it drops during an earthquake, which is related to the safety of nuclear power plants and future life extension applications. This research was based on a cooling water system in which seismic and non-seismic-grade equipment are installed together, taken as an example to study whether non-seismic-grade equipment such as DN50 fire pipes and ventilation pipes arranged above will damage the DN300 pipes and valves arranged below when earthquakes occur. In the study, the simulation was carried out with ANSYS/LS-DYNA, and Johnson-Cook was used as the material model and failure model. For the experiments, the relative positions of objects in the room were reproduced at 1:1 scale. In the experiment, the pipes and valves were filled with water at a pressure of 0.785 MPa. The pressure-holding performance of the pipe was used as the criterion for damage. In addition to the pressure-holding performance, the opening torque was considered as well for the valves. The results show that when the 10-meter-long DN50 pipe was dropped from a height of 8 meters and the 8-meter-long air pipe was dropped from a height of 3.6 meters, they did not affect the integrity of the DN300 pipe below. No failure occurred in the simulation either. After the experiment, the pressure drop over two hours for the pipe was less than 0.1%. The main body of the valve did not fail either. The change in opening torque after the experiment was less than 0.5%, but the handwheel of the valve may break, which affects the opening actions. In summary, the impact of upper pipes and ventilation pipes dropping onto the DN300 pipes and valves below in a cooling water system of a typical second-generation nuclear power plant under an earthquake was studied. As a result, the functionality of the DN300 pipeline and the valves themselves is not significantly affected, but the handwheel of the valve or similar components can be broken and requires attention.
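For reference, the Johnson-Cook constitutive model mentioned above expresses the flow stress in the standard form below; the material constants for the piping steel are not given in the abstract, so only the generic form is shown.

```latex
\sigma_y \;=\; \left(A + B\,\varepsilon_p^{\,n}\right)
\left(1 + C \ln \dot{\varepsilon}^{*}\right)
\left(1 - T^{*m}\right),
\qquad
T^{*} \;=\; \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```

Here A, B, n, C, and m are material constants, ε_p is the equivalent plastic strain, ε̇* is the dimensionless plastic strain rate, and T* is the homologous temperature; the companion Johnson-Cook failure model uses an analogous set of damage constants for the failure strain.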

Keywords: cooling water system, earthquake, integrity, pipe and valve

Procedia PDF Downloads 100
747 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations from local linear trends. Scale invariance of these signals is well captured in the multifractal characterisation using detrended fluctuation analysis (DFA). Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period, and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution in different channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate, determinism, etc. for ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period compared to post-seizure values, whereas for some patients post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in better understanding the characterisation of epileptic EEG signals from a nonlinear analysis perspective.
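A minimal sketch of the DFA computation described above: the mean-removed signal is integrated, detrended in windows with a local linear fit, and the scaling exponent is read off as the log-log slope of the fluctuation function F(n). The window sizes and the single-exponent fit are simplifications; the paper's separation into short- and long-range exponents would require fitting two slopes about a crossover scale.

```python
# Minimal DFA sketch for one EEG channel (single scaling exponent only).
import numpy as np

def dfa(signal: np.ndarray, window_sizes: np.ndarray) -> float:
    y = np.cumsum(signal - np.mean(signal))          # integrated profile
    fluctuations = []
    for n in window_sizes:
        n_windows = len(y) // n
        f2 = []
        for k in range(n_windows):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(n) versus log n
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

# Example usage (hypothetical recording): alpha between 0.5 and 1 indicates
# persistent long-range temporal correlations.
# eeg = np.loadtxt("ictal_channel.txt")
# print(dfa(eeg, np.array([16, 32, 64, 128, 256, 512])))
```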

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 159
746 Effectiveness of Imagery Compared with Exercise Training on Hip Abductor Strength and EMG Production in Healthy Adults

Authors: Majid Manawer Alenezi, Gavin Lawrence, Hans-Peter Kubis

Abstract:

Imagery training could be an important treatment for improving muscle function in patients whose exercise training is limited by pain or other adverse symptoms. However, recent studies are mostly limited to small muscle groups and are often contradictory. Moreover, a possible bilateral transfer effect of imagery training has not been examined. We therefore investigated the effectiveness of unilateral imagery training in comparison with exercise training on hip abductor muscle strength and EMG. Additionally, both limbs were assessed to investigate bilateral transfer effects. Healthy individuals took part in an imagery or exercise training intervention for two weeks and were assessed pre- and post-training. Participants (n=30), after randomization into an imagery and an exercise group, trained 5 times a week under supervision, with additional self-performed training on the weekends. The training consisted of performing, or imagining, 5 maximal isometric hip abductor contractions (= one set), repeating the set 7 times. All measurements and training sessions were performed lying on the side on a dynamometer table. The imagery script combined kinesthetic and visual imagery with an internal perspective for producing imagined maximal hip abduction contractions. The exercise group performed the same number of tasks but carried out the maximal hip abductor contractions. Maximal hip abduction strength and EMG amplitudes were measured for the right and left limbs pre- and post-training. Additionally, handgrip strength and right shoulder abduction (strength and EMG) were measured. Using mixed-model ANOVA (strength measures) and Wilcoxon tests (EMGs), the data revealed a significant increase in hip abductor strength production in the imagery group on the trained right limb (~6%). However, this was not observed in the exercise group. Additionally, the left hip abduction strength (not used for training) did not show a main effect of time; however, there was a significant group-by-time interaction revealing that strength increased in the imagery group while it remained constant in the exercise group. EMG recordings supported the strength findings, showing significant elevation of EMG amplitudes after imagery training on the right and left sides, while the exercise training group did not show any changes. Moreover, measures of handgrip strength and shoulder abduction showed no effects over time and no interactions in either group. The experiments showed that imagery training is a suitable method for effectively increasing functional parameters of larger limb muscles (strength and EMG), which were enhanced on both sides (trained and untrained), confirming a bilateral transfer effect. Indeed, exercise training did not reveal any increases in the parameters above, indicating no functional improvements. The healthy individuals tested might not easily achieve benefits from exercise training within the time tested. However, it is evident that imagery training is effective in increasing the central motor command towards the muscles and that the effect seems to be segmental (no increase in handgrip strength and shoulder abduction parameters) and affects both sides (trained and untrained). In conclusion, imagery training was effective in producing functional improvements in limb muscles and produced a bilateral transfer effect on strength and EMG measures.

Keywords: imagery, exercise, physiotherapy, motor imagery

Procedia PDF Downloads 209
745 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper describes the development of an objective index using the measurement of subtle eyeball movement to diagnose autism. Assessment of developmental disabilities varies, and the diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that will enable anyone to obtain the same quantitative judgment is needed. In conventional autism studies, diagnoses are made based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. Then we developed an objective evaluation indicator to distinguish non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his/her eyeballs stay fixed at that point, the eyes perform subtle fixational movements (i.e., tremors, drift, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked with the nervous system and reflect the mechanism that processes vision in the brain. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from the images of captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the pupil of the subject into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels in the part overlapping the present pixels, based on the afterimage. 5) Process the images at 24-30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers. The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We set this amount of change as the quantity of the subtle eyeball movements. This method made it possible to detect changes in eyeball vibration as numerical values. By comparing the numerical values between the right and left eyes, we found that there is a difference in how much they move. We compared this difference between non-autistic and autistic people and analyzed the result. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. Then we set the identification boundary using the density function of the distribution, the cumulative frequency function, and the ROC curve. With this, we established an objective index to determine autistic and normal cases, as well as false positives and false negatives.
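The pixel-counting idea in steps 1-5 can be sketched as follows. The thresholding of the dark pupil, the quadrant split about the image centre, and the per-quadrant count of pixels that differ from the reference afterimage are all illustrative assumptions; the paper does not specify its segmentation method or thresholds.

```python
# Illustrative sketch of steps 1-5: segment the pupil, split it into four
# quadrants, and count pixels that differ from a reference afterimage frame.
import cv2
import numpy as np

def pupil_mask(gray: np.ndarray, thresh: int = 40) -> np.ndarray:
    # The pupil is the darkest region; a fixed threshold is a simplification.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    return mask

def quadrant_change(mask: np.ndarray, afterimage: np.ndarray) -> list:
    diff = cv2.absdiff(mask, afterimage)             # pixels that changed
    h, w = diff.shape
    cy, cx = h // 2, w // 2
    quadrants = [diff[:cy, :cx], diff[:cy, cx:], diff[cy:, :cx], diff[cy:, cx:]]
    return [int(np.count_nonzero(q)) for q in quadrants]

# Per-frame usage at 24-30 fps: compare the left- and right-eye counts to
# quantify the asymmetry in subtle eyeball movement between the two eyes.
```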

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 261
744 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by central banks, and it has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that have been traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with the VAR Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the study period into a pre-crisis period (2002-2007) and a post-crisis period (2007-2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period. However, the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found that most of the liquidity variables are significant throughout the sample period. This reveals the predictability of monetary policy on aggregate market liquidity. This study contributes to the existing body of literature by documenting a strong predictability of monetary policy on stock liquidity in an emerging economy with an order-driven market-making system like India. Most of the previous studies have been carried out in developing economies with quote-driven or hybrid market-making systems, and their results are ambiguous across different periods. In a broader sense, this study may be considered a baseline study for further investigating the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
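A compact sketch of the VAR workflow described above, using statsmodels, is shown below: fit the VAR, test Granger causality, and compute impulse responses and the forecast error variance decomposition. The file and column names are placeholders for the monetary policy and liquidity series listed in the abstract, and lag selection by AIC is an assumption.

```python
# Sketch of the multivariate time-series workflow: VAR estimation, Granger
# causality, impulse responses and variance decomposition with statsmodels.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("nse_liquidity_policy.csv", index_col=0, parse_dates=True)
# e.g. columns: call_rate, base_money_growth, turnover_ratio, amihud_illiq

model = VAR(df)
results = model.fit(maxlags=12, ic="aic")            # lag order chosen by AIC

# Does the call money rate Granger-cause the turnover ratio?
gc = results.test_causality("turnover_ratio", ["call_rate"], kind="f")
print(gc.summary())

irf = results.irf(10)                                # impulse responses, 10 periods ahead
irf.plot(impulse="call_rate", response="turnover_ratio")

fevd = results.fevd(10)                              # forecast error variance decomposition
fevd.summary()
```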

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 359
743 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immuno Histology (IHC) Lab are the backbone for carrying out this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that will be read by a pathologist to diagnose and dictate a patient's treatment course. The lab processes 200 requests daily, leading to the generation of approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining, and sorting manufacturing stations, while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly associated with patients receiving rapid treatments and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases, while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour time period. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatments and potentially better outcomes. Additional project goals included streamlining tech and material workflow, while reducing waste and increasing efficiency. This project followed a DMAIC structure with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S, and spaghetti diagramming. Reducing lab technician movement, as well as improving their body position at each workstation, was a top concern for pathology leadership. With new equipment being brought into the lab to carry out workflow improvements, screen and tool placement was discussed with the techs in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue utilizing these techniques to further reduce the time gap between tissue analysis and cancer care.

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 210
742 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed to the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as well as the associated emissions of environmental pollutants, especially the nitrous oxides (NOₓ). To limit the environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on more and more advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize the burner operating condition with regard to NOₓ emission, field characterization and measurements are usually carried out. However, these kinds of experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems to be more adequate in order to provide guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation. Results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristics Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.

Keywords: combustion, computational fluid dynamics, nitrous oxides emission, once-through-steam-generators

Procedia PDF Downloads 95
741 An Approach to Study the Biodegradation of Low Density Polyethylene Using Microbial Strains of Bacillus subtilus, Aspergillus niger, Pseudomonas fluroscence in Different Media Form and Salt Condition

Authors: Monu Ojha, Rahul Rana, Satywati Sharma, Kavya Dashora

Abstract:

The global production rate of plastics has increased enormously, and global demand for polyethylene resins, high-density polyethylene (HDPE), linear low-density polyethylene (LLDPE), and low-density polyethylene (LDPE), is expected to rise drastically. These accumulate in the environment, posing a potential ecological threat, as they degrade at a very slow rate and remain in the environment indefinitely. The aim of the present study was to investigate the potential of commonly found soil microbes like Bacillus subtilus, Aspergillus niger, and Pseudomonas fluroscence for their ability to biodegrade LDPE in the lab under solid and liquid media conditions as well as in soil in the presence of 1% salt. This study was conducted at the Indian Institute of Technology, Delhi, India, from July to September, where the average temperature and RH (Relative Humidity) were 33 degrees Celsius and 80%, respectively. It revealed that the weight loss of market-sourced LDPE strips of approximately 4x6 cm dimensions was greater in liquid broth media than in solid agar media. The percentage weight loss by P. fluroscence, A. niger, and B. subtilus observed after 80 days of incubation was 15.52, 9.24, and 8.99%, respectively, in broth media and 6.93, 2.18, and 4.76% in agar media. LDPE strips from the same source were then subjected to soil burial in the presence of the above microbes with 1% salt (NaCl, obtained from commercial table salt), at the same temperature and RH of 33 degrees Celsius and 80%. It was found that the rate of degradation was higher in soil than under lab conditions. The weight loss of LDPE strips under these conditions was found to be 32.98, 15.01, and 17.09% for P. fluroscence, A. niger, and B. subtilus, respectively. The breaking strength was found to be 9.65 N, 29 N, and 23.85 N for P. fluroscence, A. niger, and B. subtilus, respectively. SEM analysis conducted on a Zeiss EVO 50 confirmed that the surface of LDPE becomes physically weak after biological treatment. There was an increase in surface roughness, indicating surface erosion of the LDPE film. FTIR (Fourier-transform infrared spectroscopy) analysis of the degraded LDPE films showed stretching of the aldehyde group at 3334.92 and 3228.84 cm-1 and C–C=C symmetric stretching of the aromatic ring at 1639.49 cm-1. There was also C=O stretching of the aldehyde group at 1735.93 cm-1. An N=O peak bend was observed at 1365.60 cm-1, along with C–O stretching of the ether group at 1217.08 and 1078.21 cm-1.

Keywords: microbial degradation, LDPE, Aspergillus niger, Bacillus subtilus, Peudomonas fluroscence, common salt

Procedia PDF Downloads 143
740 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. The quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and it can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 93
739 Caregivers Roles, Care Home Management, Funding and Administration in Challenged Communities: Focus in North Eastern Nigeria

Authors: Chukwuka Justus Iwegbu

Abstract:

Background: A major concern facing the world is providing senior citizens, individuals with disabilities, and other vulnerable groups with high-quality care. This issue is more serious in Nigeria's North Eastern region, where the burden of disease and disability is heavy and access to care is constrained. This study aims to fill this gap by exploring the roles, challenges, and support needs of caregivers, as well as care home management, funding, and administration in challenged communities in North Eastern Nigeria. The study will also provide a comprehensive understanding of the current situation and identify opportunities for improving the quality of care and support for caregivers and care recipients in these communities. Methods: A mixed-methods design, including both quantitative and qualitative data collection methods, will be used, guided by the stress process model of caregiving. The qualitative stage will comprise a survey, in-depth interviews, observations, and focus group discussions, while the quantitative analysis will be used to understand the variations between caregivers' roles and care home management. A review of relevant documents, such as care home policies and funding reports, will be used to gather quantitative data on the administrative and financial aspects of care. The data collected will be analyzed using both descriptive statistics and thematic analysis. A sample size of around 200-300 participants, including caregivers, care recipients, care home managers and administrators, policymakers, and health care providers, will be recruited. Findings: The study revealed that caregivers in challenged communities in North Eastern Nigeria face significant challenges, including lack of training and support, limited access to funding and resources, and high levels of burnout. Care home management and administration were also found to be inadequate, with a lack of clear policies and procedures and limited oversight and accountability. Conclusion: There is a need for increased investment in training and support for caregivers, as well as improved care home management and administration in challenged communities in North Eastern Nigeria. The study also highlights the importance of involving community members in decision-making and planning processes related to care homes and services. It contributes to the existing body of knowledge by providing a detailed understanding of the challenges faced by caregivers, care home managers, and administrators.

Keywords: caregivers, care home management, funding, administration, challenged communities, North Eastern Nigeria

Procedia PDF Downloads 81
738 Modelling Optimal Control of Diabetes in the Workplace

Authors: Eunice Christabel Chukwu

Abstract:

Introduction: Diabetes is a chronic medical condition characterized by high levels of glucose in the blood and urine; it is usually diagnosed by means of a glucose tolerance test (GTT). Diabetes can cause a range of health problems if left unmanaged, as it can lead to serious complications. It is essential to manage the condition effectively, particularly in the workplace, where the impact on work productivity can be significant. This paper discusses the modelling of optimal control of diabetes in the workplace using a control theory approach. Background: Diabetes mellitus is a condition characterized by too much glucose in the blood. Insulin, a hormone produced by the pancreas, controls the blood sugar level by regulating the production and storage of glucose. In diabetes, there may be a decrease in the body's ability to respond to insulin or a decrease in the insulin produced by the pancreas, which leads to abnormalities in the metabolism of carbohydrates, proteins, and fats. In addition to the health implications, the condition can also have a significant impact on work productivity, as employees with uncontrolled diabetes are at risk of absenteeism, reduced performance, and increased healthcare costs. While several interventions are available to manage diabetes, the most effective approach is to control blood glucose levels through a combination of lifestyle modifications and medication. Methodology: The control theory approach involves modelling the dynamics of the system and designing a controller that can regulate the system to achieve optimal performance. In the case of diabetes, the system dynamics can be modelled using a mathematical model that describes the relationship between insulin, glucose, and other variables. The controller can then be designed to regulate glucose levels and maintain them within a healthy range. Results: The modelling of optimal control of diabetes in the workplace using a control theory approach has shown promising results. The model has been able to predict the optimal dose of insulin required to maintain glucose levels within a healthy range, taking into account the individual's lifestyle, medication regimen, and other relevant factors. The approach has also been used to design interventions that can improve diabetes management in the workplace, such as regular glucose monitoring and education programs. Conclusion: The modelling of optimal control of diabetes in the workplace using a control theory approach has significant potential to improve diabetes management and work productivity. By using a mathematical model and a controller to regulate glucose levels, the approach can help individuals with diabetes achieve optimal health outcomes while minimizing the impact of the condition on their work performance. Further research is needed to validate the model and develop interventions that can be implemented in the workplace.
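
To make the control-theory idea concrete, the sketch below integrates a Bergman-style glucose-insulin minimal model under a simple proportional insulin controller; the model structure, parameter values, and controller gain are illustrative assumptions, not the authors' fitted model or control design.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bergman-style minimal model with placeholder parameters (per-minute rate constants).
p1, p2, p3 = 0.03, 0.02, 1.3e-5
Gb, Ib = 90.0, 10.0            # basal glucose (mg/dL) and basal insulin (uU/mL)

def controller(G, k_p=0.05):
    """Proportional controller: insulin infusion proportional to the glucose excess."""
    return max(0.0, k_p * (G - Gb))

def minimal_model(t, y):
    G, X, I = y                # glucose, remote insulin action, plasma insulin
    u = controller(G)          # exogenous insulin infusion decided by the controller
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    dI = -0.1 * (I - Ib) + u   # first-order insulin clearance plus infusion
    return [dG, dX, dI]

# Start from a hyperglycaemic state and simulate five hours of closed-loop regulation.
sol = solve_ivp(minimal_model, (0.0, 300.0), [250.0, 0.0, Ib], max_step=1.0)
print(f"glucose after 5 h: {sol.y[0, -1]:.1f} mg/dL")
```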

Keywords: mathematical model, blood, insulin, pancreas, glucose

Procedia PDF Downloads 47
737 The Use of Vasopressin in the Management of Severe Traumatic Brain Injury: A Narrative Review

Authors: Nicole Selvi Hill, Archchana Radhakrishnan

Abstract:

Introduction: Traumatic brain injury (TBI) is a leading cause of mortality among trauma patients. In the management of TBI, the main principle is avoiding cerebral ischemia, as this is a strong determinant of neurological outcome. The use of vasoactive drugs, such as vasopressin, has an important role in maintaining cerebral perfusion pressure to prevent secondary brain injury. Current guidelines do not suggest a preferred vasoactive drug to administer in the management of TBI, and there is a paucity of information on the therapeutic potential of vasopressin following TBI. Vasopressin, also known as arginine vasopressin (AVP), is an endogenous antidiuretic hormone, and pathways mediated by AVP play a large role in the underlying pathological processes of TBI. This creates an overlap of discussion regarding the therapeutic potential of vasopressin following TBI. Currently, its main use is in vasodilatory and cardiogenic shock in the intensive care setting, with increasing support for its use in haemorrhagic and septic shock. Methodology: This is a review article based on a literature review. An electronic search was conducted via PubMed, Cochrane, EMBASE, and Google Scholar. The aim was to identify clinical studies looking at the therapeutic administration of vasopressin in severe traumatic brain injury. The primary aim was to assess the neurological outcome of patients. The secondary aim was to assess surrogate markers of cerebral perfusion, such as cerebral perfusion pressure, cerebral oxygenation, and cerebral blood flow. Results: Eight papers were included in the final review. Three were animal studies; five were human studies, comprising three case reports, one retrospective review of data, and one randomised controlled trial. All animal studies demonstrated the benefits of vasopressors in TBI management. One animal study showed the superiority of vasopressin in reducing intracranial pressure and increasing cerebral oxygenation over a catecholaminergic vasopressor, phenylephrine. All three human case reports were supportive of vasopressin as a rescue therapy in catecholamine-resistant hypotension. The retrospective review found that vasopressin did not increase cerebral oedema in TBI patients compared to catecholaminergic vasopressors and demonstrated a significant reduction in the requirement for hyperosmolar therapy in patients who received vasopressin. The randomised controlled trial showed no significant differences in primary and secondary outcomes between TBI patients receiving vasopressin and those receiving catecholaminergic vasopressors. Apart from the randomised controlled trial, the studies included are of low-level evidence. Conclusion: Studies favour vasopressin within certain parameters of cerebral function compared to control groups. However, the neurological outcomes of the patient groups are not known, and animal study results are difficult to extrapolate to humans. Given the weaknesses of the evidence, it cannot be said with certainty whether vasopressin's benefits stand above those of other vasoactive drugs. Further randomised controlled trials, which are larger, standardised, and rigorous, are required to improve knowledge in this field.

Keywords: catecholamines, cerebral perfusion pressure, traumatic brain injury, vasopressin, vasopressors

Procedia PDF Downloads 54
736 Photoluminescence of Barium and Lithium Silicate Glasses and Glass Ceramics Doped with Rare Earth Ions

Authors: Augustas Vaitkevicius, Mikhail Korjik, Eugene Tretyak, Ekaterina Trusova, Gintautas Tamulaitis

Abstract:

Silicate materials are widely used as luminescent materials in both amorphous and crystalline phases. Lithium silicate glass is popular for making neutron-sensitive scintillation glasses. Cerium-doped single-crystalline silicates of rare earth elements and yttrium have been demonstrated to be good scintillation materials. Due to their high thermal and photo-stability, silicate glass ceramics are expected to be suitable materials for producing light converters for high-power white light-emitting diodes. In this report, the influence of glass composition and crystallization on the photoluminescence (PL) of different silicate glasses was studied. Barium (BaO-2SiO₂) and lithium (Li₂O-2SiO₂) glasses were under study. Cerium, dysprosium, erbium, and europium ions, as well as their combinations, were used for doping. The influence of crystallization was studied after transforming the doped glasses into glass ceramics by heat treatment in the temperature range of 550-850 degrees Celsius for 1 hour. The study was carried out by comparing the PL spectra, the spatial distributions of PL parameters, and the quantum efficiency of the samples under study. The PL spectra and the spatial distributions of their parameters were obtained using confocal PL microscopy. A WITec Alpha300 S confocal microscope coupled with an air-cooled CCD camera was used. A CW laser diode emitting at 405 nm was used for excitation. The spatial resolution was in the sub-micrometer domain in-plane and about 1 micrometer perpendicular to the sample surface. An integrating sphere with a xenon lamp coupled to a monochromator was used to measure the external quantum efficiency. All measurements were performed at room temperature. The chromatic properties of the light emission from the glasses and glass ceramics were evaluated. We observed that the quantum efficiency of the glass ceramics is higher than that of the corresponding glass. The investigation of the spatial distributions of PL parameters revealed that heat treatment of the glasses leads to a decrease in sample homogeneity. In the case of BaO-2SiO₂:Eu, 10-micrometer-long needle-like objects are formed when the glass is transformed into glass ceramics. The comparison of PL spectra from within and outside the needle-like structures reveals that the ratio between the intensities of the PL bands associated with Eu²⁺ and Eu³⁺ ions is larger in the bright needle-like structures. This indicates a higher degree of crystallinity in the needle-like objects. We observed that the spectral positions of the PL bands are the same in the background and the needle-like areas, indicating that heat treatment imposes no significant change on the valence state of the europium ions. The evaluation of the chromatic properties confirms the applicability of the glasses under study for the fabrication of white light sources with high thermal stability. The ability to combine barium and lithium glass matrices and doping with Eu, Ce, Dy, and Tb enables optimization of the chromatic properties.
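
A minimal sketch of how the Eu²⁺/Eu³⁺ band-intensity ratio discussed above could be quantified from a measured spectrum is given below; the spectrum, band shapes, and integration windows are hypothetical and only illustrate the integration step.

```python
import numpy as np

# Hypothetical PL spectrum: a broad Eu2+ band near 440 nm over narrow Eu3+ lines near 615 nm.
wavelength = np.linspace(380.0, 750.0, 1000)   # nm, uniform grid
spectrum = (1.0 * np.exp(-((wavelength - 440.0) / 40.0) ** 2)
            + 0.6 * np.exp(-((wavelength - 615.0) / 5.0) ** 2))

def band_intensity(wl, counts, lo, hi):
    """Integrated intensity of a PL band over [lo, hi] nm (uniform wavelength step)."""
    mask = (wl >= lo) & (wl <= hi)
    return counts[mask].sum() * (wl[1] - wl[0])

ratio = band_intensity(wavelength, spectrum, 400.0, 500.0) / band_intensity(wavelength, spectrum, 600.0, 630.0)
print(f"Eu2+/Eu3+ integrated intensity ratio: {ratio:.2f}")
```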

Keywords: glass ceramics, luminescence, phosphor, silicate

Procedia PDF Downloads 300
735 The Threats of Deforestation, Forest Fire and CO2 Emission toward Giam Siak Kecil Bukit Batu Biosphere Reserve in Riau, Indonesia

Authors: Siti Badriyah Rushayati, Resti Meilani, Rachmad Hermawan

Abstract:

A biosphere reserve is developed to create harmony among economic development, community development, and environmental protection through a partnership between humans and nature. Giam Siak Kecil Bukit Batu Biosphere Reserve (GSKBB BR) in Riau Province, Indonesia, is unique in that it is dominated by peat soil, contains many springs essential for human livelihood, and has high biodiversity. Furthermore, it is the only biosphere reserve covering privately managed production forest areas. The annual occurrences of deforestation and forest fire pose a threat toward such a unique biosphere reserve. Forest fires produce smoke that, along with mass airflow, reaches neighboring countries, particularly Singapore and Malaysia. In this research, we aimed at analyzing the threat of deforestation and forest fire and the potential CO2 emission at GSKBB BR. We used Landsat images, ArcView software, and ERDAS IMAGINE 8.5 software to conduct spatial analysis of land cover and land use changes, calculated CO2 emission based on the emission potential of each land cover and land use type, and applied simple linear regression to demonstrate the relation between CO2 emission potential and deforestation. The results showed that, besides the buffer zone and transition area, deforestation also occurred in the core area. Spatial analysis of land cover and land use changes for the years 2010, 2012, and 2014 revealed changes from natural forest and industrial plantation forest to other land use types, such as garden, mixed garden, settlement, paddy fields, burnt areas, and dry agricultural land. Deforestation in the core area, particularly at the Giam Siak Kecil Wildlife Reserve and Bukit Batu Wildlife Reserve, occurred in the form of changes from natural forest into garden, mixed garden, shrubs, swamp shrubs, dry agricultural land, open area, and burnt area. In the buffer zone and transition area, changes also took place: areas that were once swamp forest changed into garden, mixed garden, open area, shrubs, swamp shrubs, and dry agricultural land. The spatial analysis indicated that the deforestation rate in the biosphere reserve from 2010 to 2014 reached 16 119 ha/year. Besides deforestation, the threat toward the biosphere reserve area also came from forest fire. The forest fires in 2014 burned 101 723 ha of the area, of which 9 355 ha were in the core area and 92 368 ha in the buffer zone and transition area. Deforestation and forest fire increased CO2 emission by as much as 24 903 855 ton/year.
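
The sketch below illustrates the two computational steps described above: estimating CO2 emission from land-cover change areas multiplied by per-class emission factors, and regressing annual emission on deforested area. The emission factors, areas, and regression data are placeholders, not the values derived from the Landsat analysis.

```python
import numpy as np

# Step 1: CO2 emission as the area of each land-cover transition times a per-class
# emission factor. Both the factors (t CO2/ha) and the areas (ha) below are placeholders.
emission_factor = {"natural_forest_to_agriculture": 550.0,
                   "plantation_to_shrubs": 320.0,
                   "peat_swamp_forest_burnt": 900.0}
area_change_ha = {"natural_forest_to_agriculture": 9000.0,
                  "plantation_to_shrubs": 12000.0,
                  "peat_swamp_forest_burnt": 6000.0}
total_emission = sum(emission_factor[k] * area_change_ha[k] for k in area_change_ha)
print(f"estimated CO2 emission: {total_emission:,.0f} t")

# Step 2: simple linear regression of annual CO2 emission on deforested area (toy data).
deforested_ha = np.array([10000.0, 13000.0, 16000.0, 19000.0])
co2_t = np.array([14e6, 18e6, 25e6, 30e6])
slope, intercept = np.polyfit(deforested_ha, co2_t, 1)
print(f"emission increases by about {slope:.0f} t CO2 per additional deforested ha")
```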

Keywords: biosphere reserve, CO2 emission, deforestation, forest fire

Procedia PDF Downloads 466
734 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods such as hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to pay for reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used-vehicle sales websites. The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms such as the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach with a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results will be presented and published at the upcoming conference; the work integrates knowledge from several disciplines and will culminate in a master's thesis.
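
A minimal sketch of the hedonic-pricing step is given below: the log of the per-square-meter property value is regressed on a pollution-exposure variable plus controls, and the coefficient on exposure is read as the implicit price of air quality. The variables, coefficients, and synthetic data are assumptions for illustration, not the Bogotá database described above.

```python
import numpy as np

# Synthetic block-level data: log price per m2 explained by PM2.5 exposure plus controls.
rng = np.random.default_rng(0)
n = 500
pm25 = rng.uniform(10.0, 40.0, n)        # ug/m3 at the block
dist_cbd_km = rng.uniform(1.0, 20.0, n)  # distance to the city centre
floor_area = rng.uniform(40.0, 200.0, n)
log_price = (14.5 - 0.012 * pm25 - 0.03 * dist_cbd_km
             + 0.004 * floor_area + rng.normal(0.0, 0.1, n))

# Ordinary least squares via lstsq on the design matrix.
X = np.column_stack([np.ones(n), pm25, dist_cbd_km, floor_area])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# beta[1] is the semi-elasticity of price with respect to PM2.5: the implicit marginal
# price of air quality used to value emission reductions.
print(f"each extra ug/m3 of PM2.5 changes the price by about {100 * beta[1]:.2f}%")
```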

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 38
733 Thermal Properties and Water Vapor Permeability for Cellulose-Based Materials

Authors: Stanislavs Gendelis, Maris Sinka, Andris Jakovics

Abstract:

Insulation materials made from natural sources have become more popular as buildings are made more ecological, which means wide use of such renewable materials. Such natural materials replace synthetic products whose manufacture consumes a large quantity of energy. The most common and cheapest natural materials in Latvia are cellulose-based (wood and agricultural plants). The ecological aspects of such materials are well known, but experimental data about their physical properties remain lacking. In this study, six different samples of wood wool panels and a mixture of hemp shives and lime (hempcrete) are analysed. Thermal conductivity and heat capacity measurements were carried out for the wood wool and cement panels using a calibrated hot plate device. Water vapor permeability was tested for the hempcrete material using the gravimetric dry cup method. The studied wood wool panels are an eco-friendly and harmless material, widely used in the interior design of public and residential buildings where noise absorption and sound insulation are of importance. They are also suitable for high-humidity facilities (e.g., swimming pools). The panels differed in the width of the wood wool used, which is linked to their density. The measured thermal conductivities span a wide range and worsen with increasing wool width (0.066 W/(m·K) for the least dense and 0.091 W/(m·K) for the densest panel). These values are 2-3 times higher than those of mineral insulation materials and comparable to plywood and fibreboard. The measured heat capacity was in a narrower range; here, the dependence on the wool width was not as strong because heat capacity is related to mass, not volume. The resulting heat capacity is a combination of two main components. A comparison of the results for the different panels allows the most suitable sample to be selected for a specific application, because the dependencies of the thermal insulation and heat capacity properties on the wool width are not the same. Hempcrete is a much denser material than conventional thermal insulating materials. Therefore, its use helps to reinforce the structural capacity of the constructional framework while remaining lightweight. By altering the proportions of the ingredients, hempcrete can be produced as a structural, thermal, or moisture-absorbent component. Water absorption and water vapor permeability are the most important properties of these materials. Information about absorption can be found in the literature, but there are no data about water vapor transmission properties. Water vapor permeability was therefore tested for a sample of locally made hempcrete using different air humidity values to evaluate a possible difference. The results show only a slight influence of air humidity on the water vapor permeability value. The measured 'sd value' (water vapour diffusion-equivalent air layer thickness) is similar to that of mineral wool and wood fiberboard, meaning that, due to the very low resistance, water vapor passes easily through the material. At the same time, the other properties of hempcrete, structural and thermal, are totally different. As a result, experimentally based knowledge of the thermal and water vapor transmission properties of cellulose-based materials was significantly improved.
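
As an illustration of the gravimetric dry cup evaluation mentioned above, the sketch below converts a steady-state mass-change rate into a water vapour permeance and an sd value; the numerical inputs are placeholders, not the measured hempcrete data, and the simplified formula ignores cup air-layer corrections.

```python
# Dry-cup sketch (ISO 12572-style): water vapour permeance and sd value from the
# steady-state mass change of a sealed cup. All numbers below are illustrative.
delta_air = 2.0e-10        # vapour permeability of still air, kg/(m*s*Pa), approx. at 23 C
area_m2 = 0.02             # exposed specimen area
dm_dt_kg_s = 1.9e-8        # steady-state mass-change rate of the cup assembly
dp_pa = 1400.0             # vapour pressure difference across the specimen

g_rate = dm_dt_kg_s / area_m2      # vapour flux, kg/(m2*s)
permeance = g_rate / dp_pa         # kg/(m2*s*Pa)
sd_m = delta_air / permeance       # diffusion-equivalent air layer thickness, m
print(f"permeance = {permeance:.2e} kg/(m2*s*Pa), sd = {sd_m:.2f} m")
```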

Keywords: heat capacity, hemp concrete, thermal conductivity, water vapor transmission, wood wool

Procedia PDF Downloads 210
732 The Impact of a Prior Haemophilus influenzae Infection in the Incidence of Prostate Cancer

Authors: Maximiliano Guerra, Lexi Frankel, Amalia D. Ardeljan, Sarah Ghali, Diya Kohli, Omar M. Rashid

Abstract:

Introduction/Background: Haemophilus influenzae is present as a commensal organism in the nasopharynx of most healthy adults, from where it can spread to cause both systemic and respiratory tract infection. Pathogenic properties of this bacterium, as well as defects in host defense, may result in the spread of these bacteria throughout the body. This can result in a proinflammatory state and colonization, particularly in the lungs. Recent studies have failed to determine a link between H. influenzae colonization and prostate cancer, despite previous research demonstrating the presence of proinflammatory states in preneoplastic and neoplastic prostate lesions. Given these contradictory findings, the primary goal of this study was to evaluate the correlation between H. influenzae infection and the incidence of prostate cancer. Methods: To evaluate the incidence of Haemophilus influenzae infection and the subsequent development of prostate cancer, we used data provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database. We were afforded access to this database by Holy Cross Health, Fort Lauderdale, for the express purpose of academic research. Standard statistical methods were employed in this study, including Pearson's chi-square tests. Results: Between January 2010 and December 2019, the query returned 13,691 patients in each of the control and H. influenzae-infected groups. The two groups were matched by age range and CCI score. In the H. influenzae-infected group, the incidence of prostate cancer was 1.46%, while the incidence in the control group was 4.56%. The observed difference in cancer incidence was statistically significant (p < 2.2x10^-16). This suggests that patients with a history of H. influenzae infection have a lower risk of developing prostate cancer (OR 0.425, 95% CI: 0.382 - 0.472). To account for treatment bias, the data were re-analyzed in two matched groups of 3,208 patients each: patients infected with H. influenzae and treated, and controls who received the same medications for a different cause. Patients infected with H. influenzae and treated had a prostate cancer incidence of 2.49%, whereas the control group incidence was 4.92%, with a p-value < 2.2x10^-16 (OR 0.455, 95% CI: 0.526 - 0.754), indicating that the initial results were not due to the use of medications. Conclusion: The findings of our study reveal a statistically significant correlation between H. influenzae infection and a decreased incidence of prostate cancer. Our findings suggest that prior infection with H. influenzae may confer some degree of protection to patients and reduce their risk of developing prostate cancer. Future research is recommended to further characterize the potential role of Haemophilus influenzae in the pathogenesis of prostate cancer.
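
A minimal sketch of the underlying 2x2 analysis (Pearson's chi-square test plus an odds ratio with a Woolf-type confidence interval) is shown below; the cell counts are hypothetical values chosen only to illustrate the calculation and are not the study's raw data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (rows: infected / not infected; columns: cancer / no cancer).
table = np.array([[200, 13491],
                  [624, 13067]])

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# Woolf (log) method for an approximate 95% confidence interval on the odds ratio.
se_log_or = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f}), chi-square p = {p:.2e}")
```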

Keywords: Haemophilus influenzae, incidence, prostate cancer, risk

Procedia PDF Downloads 182
731 Influence of Maternal Factors on Growth Patterns of Schoolchildren in a Rural Health and Demographic Surveillance Site in South Africa: A Mixed Method Study

Authors: Perpetua Modjadji, Sphiwe Madiba

Abstract:

Background: The growth patterns of children are good indicators of their nutritional status, health, and socioeconomic level. However, maternal factors and the belief system of the society affect the growth of children and can promote undernutrition. This study determined the influence of maternal factors on the growth patterns of schoolchildren in a rural site. Methods: A convergent mixed-methods study was conducted among 508 schoolchildren and their mothers in the Dikgale Health and Demographic Surveillance System Site, South Africa. Multistage sampling was used to select schools (purposive) and learners (random), who were paired with their mothers. Anthropometry was measured, and socio-demographic, obstetric, and household information, as well as maternal influence on children's nutrition and growth, were assessed using an interviewer-administered questionnaire (quantitative). The influence of the cultural beliefs and practices of mothers on the nutrition and growth of their children was explored using focus group discussions (qualitative). Narratives of mothers were used to better understand the growth patterns of schoolchildren (mixed method). Data were analyzed using STATA 14 (quantitative) and Nvivo 11 (qualitative). Quantitative and qualitative data were merged for integrated mixed-methods analysis using a joint display analysis. Results: The mean age of the children was 10 ± 2 years, ranging from 6 to 15 years. Substantial percentages of thinness (25%), underweight (24%), and stunting (22%) were observed among the children. The mothers had a mean age of 37 ± 7 years, and 75% were overweight or obese. A depressed socio-economic status was observed, indicated by a high rate of unemployment with no income (82.3%) and dependency on social grants (86.8%). Determinants of poor growth patterns were the child's age and gender, maternal age, height, and BMI, access to water supply, and refrigerator use. The narratives of the mothers suggested that the children in most of their households were exposed to poverty and inadequate intake of quality food. Conclusion: Poor growth patterns were observed among the schoolchildren, while their mothers were overweight or obese. The child's gender, school grade, maternal body mass index, and access to water were the main determinants. Congruence was observed between most qualitative themes and quantitative constructs. The need for a multisectoral approach that considers evidence-based and feasible nutrition programs for schoolchildren, especially those in rural settings, and educates mothers cannot be over-emphasized.

Keywords: growth patterns, maternal factors, rural context, schoolchildren, South Africa

Procedia PDF Downloads 147
730 A Descriptive Study on Syrian Entrepreneurs in Turkey

Authors: Rudainah Alkhazam, Özlem Yaşar Uğurlu

Abstract:

Immigrant entrepreneurship arises when immigrants start entrepreneurial activity in the country they relocate to. The future prosperity and stability of refugee-hosting countries depend on the mutual social and economic benefits between residents and refugees. Hosting Syrian refugees and workers necessitates efforts to assist both residents and refugees in meeting their daily needs, contributing lawfully to local and possibly regional economies through trade, and instilling hope in their future. This study investigates the effects of Syrian refugee entrepreneurs on host communities' business sectors, focusing on Turkey. Specifically, we examine entrepreneurship in general and its role in the country's economy. Because Turkey is the most popular resettlement destination for Syrian refugees, this study sheds light on the challenges of successful migrant entrepreneurship in Turkey and its role in the business sector. The research relies on a mixed-method approach, which helps identify recurring themes, favorable results, and conflicting results across methods, allowing us to draw accurate conclusions. The study adopts a quantitative method for collecting numerical data from Syrian refugees in Turkey. The self-administered survey was translated into Arabic to ensure that the respondents understood the questions and possible replies. The research uses survey questionnaires to gather the majority of the data. These surveys contain closed-ended questions with nominal, ratio, and Likert scales. The data are analyzed using linear regression and the Statistical Package for the Social Sciences (SPSS) to ascertain the role of Syrian entrepreneurs in the business sectors of Turkey. The research uses the findings to make recommendations for the future. Syrian entrepreneurs, among migrant entrepreneurs, contribute to the labor market, and the majority of them are young people. This research noted the significant participation of Syrian immigrant women in the entrepreneurship sector. The previous experience of Syrians in trade and in running their own businesses plays a vital role in the success of their businesses in the host countries. The study shows that Syrian entrepreneurs have been able to integrate effectively into various Turkish business sectors, rely on themselves, open and manage their own projects, and market them in the Turkish market. Syrian entrepreneurs consider that the investment and labor laws, commercial arrangements, and facilities for obtaining financial resources in Turkey need to be more flexible and more available to immigrant entrepreneurs.

Keywords: entrepreneurship, immigration, Syrian, Turkey, refugees, investors, socio-economic benefits, unemployment

Procedia PDF Downloads 49
729 TiO₂ Nanoparticles Induce DNA Damage and Expression of Biomarker of Oxidative Stress on Human Spermatozoa

Authors: Elena Maria Scalisi

Abstract:

The increasing production and use of TiO₂ nanoparticles (NPs) have inevitably led to their release into the environment, thereby posing a threat to organisms and also to humans. Human exposure to TiO₂-NPs may occur during both manufacturing and use. TiO₂-NPs are common in consumer products for dermal application, toothpaste, food colorants, and nutritional supplements, so oral exposure may occur during the use of such products. Once in the body, TiO₂-NPs, thanks to their small size (<100 nm), can cross the blood-testis barrier, inducing effects on the testis and hence on male reproductive health. The nanoscale size of TiO₂ increases the surface-to-volume ratio, making the particles more reactive in a cell and increasing their ability to produce reactive oxygen species (ROS). In male germ cells, ROS at physiological levels have important implications in maintaining the normal functions of mature spermatozoa; moreover, in spermatozoa they are important signaling molecules for hyperactivation and the acrosome reaction. Nevertheless, an excess of ROS from external inputs such as NPs can increase oxidative stress (OS), which results in DNA damage and apoptosis. The aim of our study was to investigate the impact of TiO₂ NPs on human spermatozoa, evaluating DNA damage and the expression of proteins involved in cell stress. Following the WHO 2021 guidelines, we exposed human spermatozoa in vitro to TiO₂ NPs at concentrations of 50 ppm, 100 ppm, 250 ppm, and 500 ppm for 1 hour (at 37°C and 5% CO₂). DNA damage was evaluated by the Sperm Chromatin Dispersion (SCD) test and the TUNEL assay; moreover, we evaluated the expression of biomarkers of oxidative stress such as Heat Shock Protein 70 (HSP70) and metallothioneins (MTs). Sperm parameters such as motility and viability were also evaluated. Our results did not show a significant reduction in the motility of spermatozoa at the end of the exposure. On the contrary, progressive motility was increased at the highest concentration (500 ppm), and the increase was statistically significant compared to the control (p < 0.05). Viability was also not changed by exposure to TiO₂-NPs (p < 0.05). However, increased DNA damage was observed at all concentrations, and the TUNEL assay highlighted the presence of single-strand breaks in the DNA. The spermatozoa responded to the presence of TiO₂-NPs with the expression of HSP70, which has a protective function because it allows the maintenance of cellular homeostasis in stressful or lethal conditions. Positivity for MTs was observed mainly at the concentration of 4 mg/L. Although the biological and physiological function of metallothioneins (MTs) in the male genital organs is unclear, our results highlight that the MTs expressed by spermatozoa maintain their biological role of detoxification from metals. Our results add to the existing data in the literature on the toxicity of TiO₂-NPs and reproduction.

Keywords: human spermatozoa, DNA damage, TiO₂-NPs, biomarkers

Procedia PDF Downloads 129
728 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based light detection and ranging (LIDAR) systems suffer from two main disadvantages: a slow and unreliable mechanical spatial scan and the rather wide linewidth of conventional lasers, which limits the speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing for in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is the direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust comb generation at 1400-1700 nm with a 150 GHz or 1 THz line spacing and measured Lorentzian linewidths of less than 1 kHz for stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser. A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. In order to test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, showing a velocity resolution down to 16 nm/s, while the non-SIL diode laser allowed only 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
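
As a back-of-the-envelope check on the reported velocity resolution, the sketch below applies the standard Doppler relation dv = lambda * df / 2; the 1550 nm carrier wavelength and the beat-frequency resolutions are assumptions for illustration, not the exact signal chain of the experiment.

```python
# Doppler relation for the minimum resolvable line-of-sight speed: dv = lambda * df / 2.
wavelength_m = 1550e-9   # assumed telecom-band carrier

def velocity_resolution(freq_resolution_hz):
    """Smallest resolvable speed for a given beat-frequency resolution."""
    return wavelength_m * freq_resolution_hz / 2.0

# A kHz-class linewidth limits dv to roughly the mm/s range, whereas resolving beat-note
# shifts of a few tens of mHz corresponds to the ~16 nm/s scale reported above.
for df_hz in (1e3, 1.0, 0.02):
    print(f"df = {df_hz:g} Hz  ->  dv = {velocity_resolution(df_hz) * 1e9:.3g} nm/s")
```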

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 53