Search results for: interval computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1357


67 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise

Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris

Abstract:

Parkinson’s disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world’s population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise in PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest-strap heart rate monitor (Polar Electro Oy, Kempele). The device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user-friendly device. The Fitbit Charge 4 (Fitbit Inc.), a wrist-worn heart rate monitor, is a potentially comfortable, user-friendly alternative. The Polar H10 has been used in large trials, and for our purposes we treated it as the gold standard for beat-to-beat period (R-R interval) assessment. Previous literature has shown that the Fitbit Charge 4 has accuracy comparable to the Polar H10 in healthy subjects; it has yet to be determined whether the Fitbit is as accurate as the Polar H10 in subjects with PD, or in clinical populations generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were recorded simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr <=2) were enrolled in a 6-month exercise training program designed for PD patients. Subjects participated in three one-hour exercise sessions per week and wore both the Fitbit and the Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias of the Fitbit HR under rest (5 min) and intensive exercise (20 min) by comparing the mean HR during each period to the corresponding mean measured by the Polar (HRFitbit − HRPolar).
We also measured the sensitivity and specificity of the Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual’s theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high-intensity exercise, with the Fitbit overestimating in both conditions. The mean bias of the Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high-intensity exercise sessions was 97.14%. The correlation between the two devices was non-linear, suggesting a tendency of the Fitbit to saturate at high HR values. Conclusion: The performance of the Fitbit Charge 4 is comparable to that of the Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest-strap technology in future similar studies of clinical populations.
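The bias and sensitivity computations described in the abstract can be sketched as follows; the per-session heart rates, the subject age, the 220 − age approximation of maximum HR, and all variable names are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical per-session mean heart rates (bpm); not the study's data.
hr_fitbit = np.array([142.0, 155.0, 128.0, 160.0])
hr_polar = np.array([138.0, 150.0, 127.0, 152.0])

# Per-session bias, HRFitbit - HRPolar, and its mean across sessions.
bias = hr_fitbit - hr_polar
mean_bias = bias.mean()

# Intensive-exercise threshold: 70% of the theoretical maximum HR,
# here approximated with the common 220 - age formula (assumption).
age = 60
threshold = 0.7 * (220 - age)  # 112 bpm

# Sensitivity: fraction of truly intensive sessions (per the Polar,
# treated as gold standard) that the Fitbit also flags as intensive.
truly_intensive = hr_polar >= threshold
flagged = hr_fitbit >= threshold
sensitivity = (flagged & truly_intensive).sum() / truly_intensive.sum()
```

A positive `mean_bias` corresponds to the overestimation by the Fitbit reported in the Results.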

Keywords: fitbit, heart rate measurements, parkinson’s disease, wrist-wearable devices

Procedia PDF Downloads 82
66 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has drawn the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects, focusing on timetables, rolling stock, and crew duties but not taking infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to share the available power optimally among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. It is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then allows calibrating the input variables that do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested on a simple railway system with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable that guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
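The factor prioritization step can be illustrated with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The toy model below is only a stand-in for the dynamic railway simulator; the coefficients, sample sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the railway simulation: the output depends strongly
    # on factor 0 (e.g., departure spacing) and weakly on factor 1.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

n, k = 200_000, 2
A = rng.uniform(0.0, 1.0, size=(n, k))  # two independent input samples
B = rng.uniform(0.0, 1.0, size=(n, k))
yA, yB = model(A), model(B)
var_y = yA.var()

S = []
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]  # replace ("pick") only factor i from sample B
    # Saltelli (2010) estimator of the first-order Sobol index S_i
    S.append(np.mean(yB * (model(AB) - yA)) / var_y)
# Analytically, S = [0.9, 0.1] for this additive model.
```

Factor 0 dominating the index (S[0] ≈ 0.9) mirrors the abstract's finding that departure spacing contributes more than 50% of the output variation.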

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 384
65 Psychophysiological Synchronization between the Manager and the Subordinate during a Performance Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Previous studies have shown that emotional intelligence (EI) plays an important role in leadership and social interaction. On the other hand, physiological synchronization between two interacting participants has been related to, for example, the intensity of the interaction and, interestingly, also to empathy. We suggest that the amount of covariation in physiological signals between two interacting persons is also related to how the discussion is perceived subjectively. To study the interrelations between physiological synchronization, emotional intelligence, and subjective perception of the interaction, performance review discussions between real manager-subordinate dyads were studied using psychophysiological measurements and self-reports. The participants consisted of 40 managers, of whom 24 were female, and 78 of their subordinates, of whom 45 were female. The participants worked in various fields, for example banking, education, and engineering. Each manager had a normal performance review discussion with two subordinates, except for two managers who, due to scheduling issues, had a discussion with only one subordinate. The managers were on average 44.5 years old and the subordinates on average 45.5 years old. Written consent, in accordance with the Declaration of Helsinki, was obtained from all participants. After the discussion, the participants filled in a questionnaire assessing their emotions during the discussion. This included a self-assessment manikin (SAM) scale for emotional valence during the discussion: a 9-point graphical scale representing a manikin whose facial expressions ranged from smiling and happy to frowning and unhappy. In addition, the managers completed the EI360, a 37-item self-report trait emotional intelligence questionnaire. The psychophysiological activity of the participants was recorded using two Varioport-B portable recording devices.
Cardiac activity (electrocardiogram, ECG) was measured with two electrodes placed on the torso. The inter-beat interval (IBI, the time between two successive heart beats) was calculated from the ECG signals. Facial muscle activation (electromyography, EMG) was recorded at three sites on the left side of the face: zygomaticus major (cheek muscle), orbicularis oculi (periocular muscle), and corrugator supercilii (frowning muscle). The facial-EMG signals were rectified and smoothed, and cross-coherences were calculated between the members of each dyad for all three EMG signals, for the baseline and discussion periods. The values were natural-log transformed to normalize the distributions. Higher cross-coherence during the discussion between the manager's and the subordinate's zygomatic muscles was related to more positive self-reported emotional valence, F(1, 66.137) = 7.051, p = 0.01. Thus, synchronized cheek muscle activation, whether due to synchronous smiling or talking, was related to a more positive perception of the discussion. In addition, higher IBI synchronization between the manager and the subordinate during the discussion was related to higher self-reported emotional intelligence of the manager, F(1, 27.981) = 4.58, p = 0.041. That is, EI was related to synchronous cardiac activity and possibly to similar physiological arousal levels. The results imply that psychophysiological synchronization could be a potentially useful index in the study of social interaction and a valuable tool in the coaching of leadership skills in organizational contexts.
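The cross-coherence computation can be sketched with `scipy.signal.coherence` on two synthetic signals; the sampling rate, the shared 0.5 Hz component, and the signal names are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 32.0                        # assumed envelope sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)     # one minute of signal

# Two rectified-and-smoothed "zygomaticus EMG" envelopes sharing a
# common 0.5 Hz component (synchronous smiling/talking) plus noise.
shared = np.sin(2 * np.pi * 0.5 * t)
manager = shared + 0.5 * rng.standard_normal(t.size)
subordinate = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence, averaged over Welch segments.
f, cxy = coherence(manager, subordinate, fs=fs, nperseg=256)

# Natural-log transform to normalize the distribution, as in the abstract.
log_cxy = np.log(cxy + 1e-12)

# Coherence near the shared 0.5 Hz component is high; elsewhere it is low.
sync_at_half_hz = cxy[(f >= 0.45) & (f <= 0.55)].mean()
```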

Keywords: emotional intelligence, leadership, psychophysiology, social interaction, synchronization

Procedia PDF Downloads 305
64 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVFs) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVFs were enrolled in this study. Blood flow sounds were collected using a digital stethoscope at 6 different locations along each patient's AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds were labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated by concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively.
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function: the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance of the encodings at initialization either leads to faster convergence or helps the model escape a local minimum.
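A minimal sketch of the scaled ordinal encoding described above; the function name is hypothetical, and the zero vector stands in for the flattened ViT feature map.

```python
import numpy as np

# Ordinal site encodings from the abstract, with a tunable scale factor.
SITE_CODES = {"artery": 0, "arch": 1, "proximal": 2,
              "middle": 3, "distal": 4, "anastomosis": 5}

def append_location(features: np.ndarray, site: str, scale: float) -> np.ndarray:
    """Concatenate the scaled integer site code to a flattened feature vector."""
    return np.concatenate([features.ravel(), [SITE_CODES[site] * scale]])

flat_features = np.zeros(6)                           # stand-in for ViT features
x1 = append_location(flat_features, "arch", 1.0)      # appends 1.0
x100 = append_location(flat_features, "arch", 100.0)  # appends 100.0
```

Only the scale of the appended code changes between the three reported experiments; the model architecture and loss are unchanged.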

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 65
63 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers

Authors: Pawel Martynowicz

Abstract:

Vibration is a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, and high buildings, which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard ‘ground-hook’ law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the resulting open-loop control suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as the numerical and experimental verification, of Pontryagin maximum-principle-based vibration control concepts that produce the actuator control input directly (rather than a demanded force), so that the force tracking algorithm, which introduces control inaccuracy, is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified ‘ground-hook’ law, can be implemented directly in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient, or random disturbances, which is a limitation of some other known solutions.
No offline calculation, assumption about excitations/disturbances, or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, so the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared with other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system and accounting for multiple operational aspects, such as minimization of the deflection and acceleration amplitudes of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., the electric current in the MR damper coil), and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems these values exceed 3.5.
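For context, the standard clipped ‘ground-hook’ law that the abstract uses as a benchmark can be sketched as below. This is the benchmark, not the author's optimal-based variant; the current limits, signs, and names are illustrative assumptions for a damper whose force opposes the relative velocity.

```python
def groundhook_command(v_structure: float, v_relative: float,
                       i_min: float = 0.0, i_max: float = 2.0) -> float:
    """Clipped 'ground-hook' command current for a semiactive MR damper.

    The MR damper can only dissipate energy: its force opposes the relative
    velocity v_relative = v_structure - v_absorber. A high coil current is
    commanded only when that passive force direction also opposes the
    structure's absolute velocity; otherwise the current is minimized.
    """
    if v_structure * v_relative > 0.0:
        # Damper force (directed against v_relative) opposes structure motion.
        return i_max
    return i_min
```

The optimal-based methods in the abstract replace this switching rule with control inputs derived from the Pontryagin maximum principle, while keeping the same dissipative actuator constraints.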

Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines

Procedia PDF Downloads 109
62 Determinants of Domestic Violence among Married Women Aged 15-49 Years in Sierra Leone by an Intimate Partner: A Cross-Sectional Study

Authors: Tesfaldet Mekonnen Estifanos, Chen Hui, Afewerki Weldezgi

Abstract:

Background: Intimate partner violence (hereafter IPV) is a major global public health challenge that tortures and disables women in the very place where they ought to be most secure: within their own families. Because the family unit is commonly viewed as a private sphere, violent acts against women remain underreported. There is limited research and knowledge about the factors linked to IPV in Sierra Leone. This study therefore estimates the prevalence rate of IPV and its predicting factors. Methods: Data were taken from the Sierra Leone Demographic and Health Survey (SDHS, 2013), the first of its kind to incorporate information on domestic violence. A multistage cluster sampling design was used, and information was gathered with a standard questionnaire. A total of 5185 selected respondents were interviewed, of whom 870 had never been in union and were thus excluded. To analyze the two dependent variables, experience of IPV ‘ever’ and ‘in the 12 months prior to the survey’, a total of 4315 women (currently or formerly married) and 4029 women (currently in union) were included, respectively. These dependent variables were constructed from three forms of violence, namely physical, emotional, and sexual. Data were analyzed using SPSS version 23 in a three-step process. First, descriptive statistics were used to show the frequency distribution of both the outcome and explanatory variables. Second, bivariate analysis using the chi-square test was applied to assess the individual relationship between the outcome and explanatory variables. Third, multivariate logistic regression analysis was undertaken using a hierarchical modeling strategy to identify the influence of the explanatory variables on the outcome variables. Odds ratios (OR) and 95% confidence intervals (CI) were used to examine the associations, with p-values less than 0.05 considered statistically significant.
Results: The prevalence of lifetime IPV among ever-married women was 48.4%, while 39.8% of those currently married had experienced IPV in the year preceding the survey. Women with 1 to 4, or 5 or more, children ever born were more likely to experience lifetime IPV. However, women who owned property, and those who cited 3-5 reasons for which wife-beating is acceptable, were less likely to experience lifetime IPV. Witnessing parental violence, a partner's controlling marital behavior, and being afraid of one's partner were associated with both experience of IPV ‘ever’ and ‘in the year prior to the survey’. Respondents who agreed that wife-beating is justifiable in certain situations, and those in professional occupations, had lower odds of reporting IPV in the year prior to data collection. Conclusion: This study indicates that the factors significantly correlated with IPV in Sierra Leone are mostly husband-related, specifically controlling marital behaviors. Addressing IPV in Sierra Leone requires joint efforts that target men, raise awareness of controlling behavior, and promote women's safety in relationships.
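The odds-ratio reporting described in the Methods follows directly from a fitted logistic regression coefficient; the coefficient and standard error below are hypothetical values for illustration, not the study's estimates.

```python
import math

# Hypothetical logistic-regression coefficient and standard error for one
# predictor (e.g., witnessing parental violence); not the study's values.
beta, se = 0.62, 0.11

# Exponentiating the coefficient gives the odds ratio; the 95% CI uses
# the normal approximation beta +/- 1.96 * SE on the log-odds scale.
odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)

# p < 0.05 exactly when the 95% CI excludes OR = 1.
significant = not (ci_low <= 1.0 <= ci_high)
```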

Keywords: husband behavior, married women, partner violence, Sierra Leone

Procedia PDF Downloads 117
61 Contrastive Analysis of Parameters Registered in Training Rowers and the Impact on the Olympic Performance

Authors: Gheorghe Braniste

Abstract:

The management of the training process in sports is closely related to awareness of the close connection between performance and the morphological, functional, and psychological characteristics of the athlete's body. Achieving high results in Olympic sports is influenced, on the one hand, by the genetically determined characteristics of the body and, on the other, by the morphological, functional, and motor abilities of the athlete. Taking into account the importance of properly understanding athletes' developmental specificity in order to assess their competitive potential, this study provides a comparative analysis of the parameters that characterize the growth, development, and level of adaptation of sweep rowers over the age interval from 12 to 20 years. The study established that, over the multi-annual training process, the athletes' bodies register significant adaptive changes in morphological, functional, psychomotor, and sport-technical parameters. Under the influence of both specific and non-specific physical effort, the adaptability of the body increases and it reaches a much higher level of functionality within these parameters, with useful and economical adaptive reactions influenced by internal and external environmental factors. The research was carried out over 7 years on a group of 28 athletes, following their evolution and recording the specific parameters of each age stage. The physical, morpho-functional, psychomotor, and technical levels of the rowers were determined from screening data collected at the State University of Physical Education and Sports in the Republic of Moldova.
During the research, measurements were made of height in the standing and sitting positions, arm span, weight, and chest circumference and perimeter; vital capacity of the lungs, with subsequent determination of the vital index; tolerance to oxygen deficiency in venous blood (the Stange and Genchi breath-holding tests, which characterize the level of oxygen saturation); absolute and relative strength of the hand and back; body mass and morphological maturity indices (Quetelet index) and body surface area; psychomotor tests (Romberg test, 10-s tapping test, reaction to a moving object, and visual and auditory motor reactions); and the technical parameters of rowing over a competitive distance of 200 m. At the end of the study, it was found that high performance in sports is associated, on the one hand, with the genetically determined characteristics of the body and, on the other hand, with favorable adaptive and energy-saving reactions, as well as morphofunctional changes influenced by internal and external environmental factors. The results obtained were positively reflected in achieving the maximum level of training of the athletes in order to deliver performance in large-scale competitions, and especially in the Olympic Games.
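Two of the simple anthropometric indices mentioned above can be computed directly; the formulas are the standard definitions, and the example values are hypothetical.

```python
def quetelet_index(weight_kg: float, height_m: float) -> float:
    """Quetelet (body mass) index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def vital_index(vital_capacity_ml: float, weight_kg: float) -> float:
    """Vital index: lung vital capacity per kilogram of body mass (ml/kg)."""
    return vital_capacity_ml / weight_kg

# Hypothetical rower: 72 kg, 1.80 m, 5000 ml vital capacity.
bmi = quetelet_index(72.0, 1.80)   # ~22.2 kg/m^2
vi = vital_index(5000.0, 72.0)     # ~69.4 ml/kg
```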

Keywords: olympics, parameters, performance, peak

Procedia PDF Downloads 106
60 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation for extracting appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). To this end, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
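The DTW matching of appliance signatures can be illustrated with the classic dynamic-programming formulation; the two power traces below are hypothetical 1/60 Hz samples, not LPG or REDD data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D power traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match of previous samples.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

signature = [0.0, 0.0, 120.0, 120.0, 120.0, 0.0]  # hypothetical appliance cycle (W)
window = [0.0, 120.0, 120.0, 120.0, 120.0, 0.0]   # observed meter window (W)
d = dtw_distance(signature, window)
```

Because DTW warps the time axis, the shifted and stretched 120 W plateau still matches the signature exactly, which is why DTW suits appliances whose cycles vary in timing at low sampling rates.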

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 59
59 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar

Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh

Abstract:

Monitoring for the deterioration of concrete infrastructure is an important assessment tool for engineers, yet locating deterioration within a structure can be difficult: if a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods assist engineers in assessing the subsurface condition of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment demonstrates the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to increasing fracture generation, in particular during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 × 600 × 300 mm in increments of 10 kN until the wall structurally failed at 107.6 kN. At each increment, the load was held constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles at each applied load interval were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. Corresponding depth-slices were subtracted between data sets to compare the radar amplitude responses and monitor for changes. At lower values of applied load (0-60 kN), few changes were observed in the differences of the radar amplitude responses between data sets.
At higher values of applied load (100 kN), closer to structural failure, larger differences in radar amplitude response between data sets were highlighted in the GPR data: up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar data sets. Distinct regions were observed in the 100 kN difference data set (100 kN − 0 kN) close to the location of the final failure crack. The key regions were a conical feature located at approximately 3.0-12.0 cm depth from the surface and a vertical linear feature at approximately 12.1-21.0 cm depth. These regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, an increase in the volume of micro-cracks, or the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool for highlighting and monitoring regions of large change in radar amplitude response that may be associated with significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting, and it can help reduce risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within a structure that exhibit large changes in radar amplitude over calendar time.
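The depth-slice differencing described above amounts to grid subtraction and a percent-change threshold; the tiny amplitude grids below are hypothetical stand-ins for the reconstructed depth-slice grids.

```python
import numpy as np

# Hypothetical radar-amplitude depth-slice grids at 0 kN and 100 kN load.
amp_0kn = np.array([[1.0, 1.0],
                    [1.0, 2.0]])
amp_100kn = np.array([[1.1, 1.0],
                      [1.0, 8.0]])

# Subtract corresponding depth-slices between data sets.
diff = amp_100kn - amp_0kn

# Percent change relative to the 0 kN baseline at each grid cell.
pct_change = 100.0 * diff / amp_0kn

# Flag cells with >= 300% amplitude increase (the abstract's maximum).
anomalies = pct_change >= 300.0
```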

Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring

Procedia PDF Downloads 162
58 Correlates of Comprehensive HIV/AIDS Knowledge and Acceptance Attitude Towards People Living with HIV/AIDS: A Cross-Sectional Study among Unmarried Young Women in Uganda

Authors: Tesfaldet Mekonnen Estifanos, Chen Hui, Afewerki Weldezgi

Abstract:

Background: Youth in general, and young females in particular, remain at the center of the HIV/AIDS epidemic. Sexual risk-taking among young unmarried women is relatively high, and they are among the most vulnerable and highly exposed to HIV/AIDS. Improving HIV/AIDS knowledge and the acceptance attitude towards people living with HIV (PLWHIV) plays a great role in averting the incidence of HIV/AIDS. Thus, the aim of this study was to explore the level and correlates of HIV/AIDS knowledge and accepting attitudes toward PLWHIV. Methods: A cross-sectional study was conducted using data from the Uganda Demographic and Health Survey 2016 (UDHS-2016), a nationally representative household survey using multistage cluster probability sampling and face-to-face interviews with standard questionnaires. Unmarried women aged 15-24 years (a sample of 2019) were selected from the total sample of 8674 women aged 15-49 years and were analyzed using SPSS version 23. Independent variables such as age, religion, educational level, residence, and wealth index were included, along with two binary outcome variables (comprehensive HIV/AIDS knowledge and acceptance attitude toward PLWHIV). We used the chi-square test as well as multivariate regression analysis to explore the correlations of the explanatory variables with the outcome variables. Results are reported as odds ratios (OR) with 95% confidence intervals (95% CI), taking a p-value less than 0.05 as significant. Results: Almost all (99.3%) of the unmarried women aged 15-24 years were aware of HIV/AIDS, but only 51.2% had adequate comprehensive knowledge of HIV/AIDS. Only 69.4% knew that both using a condom every time they had sex and having only one faithful uninfected partner can prevent HIV/AIDS transmission. About 66.6% of the unmarried women rejected at least two common local misconceptions about HIV/AIDS.
Moreover, an alarmingly low proportion (20.3%) of the respondents had a positive acceptance attitude toward PLWHIV. On multivariate analysis, age 20-24 years, urban residence, being educated, and being wealthier were predictors of adequate comprehensive HIV/AIDS knowledge. Furthermore, participants with adequate comprehensive knowledge about HIV/AIDS were more likely (OR 1.94; 95% CI 1.52-2.46) to have a positive acceptance attitude toward PLWHIV than those with inadequate knowledge. Respondents with no education and those of Muslim or Pentecostal religion emerged as less likely to have a positive acceptance attitude toward PLWHIV. Conclusion: This study found a high level of awareness, but the levels of comprehensive knowledge and positive acceptance attitude were not encouraging. Thus, expanding access to comprehensive sexuality education and strengthening educational campaigns on HIV/AIDS in communities, health facilities, and schools are needed, with a greater focus on disadvantaged women with low educational levels, poor socioeconomic status, and rural residence. Sexual risk behaviors among the most affected group - young women - also play a role in the spread of HIV/AIDS. Hence, further research assessing the factors contributing to sexual risk-taking might have a positive impact on the fight against HIV/AIDS.
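Odds ratios with 95% confidence intervals of the kind reported above can be computed from a 2×2 table using the standard Wald method on the log scale; a minimal sketch with illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = outcome-positive/negative among the exposed,
    c/d = outcome-positive/negative among the unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts only
or_, lo, hi = odds_ratio_ci(60, 40, 30, 70)
```

An OR is significant at p<0.05 exactly when the 95% CI excludes 1, which is the criterion the abstract uses.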

Keywords: acceptance attitude, HIV/AIDS, knowledge, unmarried women

Procedia PDF Downloads 126
57 Anticancer Potentials of Aqueous Tinospora cordifolia and Its Bioactive Polysaccharide, Arabinogalactan on Benzo(a)Pyrene Induced Pulmonary Tumorigenesis: A Study with Relevance to Blood Based Biomarkers

Authors: Vandana Mohan, Ashwani Koul

Abstract:

Aim: To evaluate the potential of aqueous Tinospora cordifolia stem extract (Aq.Tc) and arabinogalactan (AG) against pulmonary carcinogenesis and associated tumor markers. Background: Lung cancer is one of the most frequent malignancies, with a high mortality rate due to the limitations of early detection, resulting in low cure rates. Current research focuses on identifying blood-based biomarkers such as CEA, ctDNA, and LDH, which may have the potential to detect cancer at an early stage and to evaluate therapeutic response and recurrence. Medicinal plants and their active components have been widely investigated for their anticancer potential. The aqueous preparation of T. cordifolia extract is enriched in the polysaccharide fraction, i.e., AG, compared with other types of extract. Moreover, the polysaccharide fraction of T. cordifolia has shown profound anti-metastatic activity in in vitro lung cancer models. However, little has been explored about its effect in in vivo lung cancer models and the underlying mechanisms involved. Experimental Design: Mice were randomly segregated into six groups. Group I animals served as controls. Group II animals were administered Aq.Tc extract (200 mg/kg b.w.) p.o. on alternate days. Group III animals were fed AG (7.5 mg/kg b.w.) p.o. on alternate days (thrice a week). Group IV animals were administered benzo(a)pyrene (B(a)P; 50 mg/kg b.w.) i.p. twice within an interval of two weeks. Group V animals received Aq.Tc extract as in Group II, with B(a)P administered after two weeks of Aq.Tc administration following the same protocol as for Group IV. Group VI animals received AG as in Group III, with B(a)P administered after two weeks of AG administration.
Results: Administration of B(a)P to mice resulted in increased tumor incidence, multiplicity, and pulmonary somatic index, with a concomitant increase in serum/plasma markers such as CEA, ctDNA, LDH, and TNF-α. Aq.Tc and AG supplementation significantly attenuated these alterations at different stages of tumorigenesis, thereby showing a potent anti-cancer effect in lung cancer. A more pronounced decrease in serum/plasma markers was observed in animals treated with Aq.Tc than in those fed AG. Extensive hyperproliferation of the alveolar epithelium was also prominent in B(a)P-induced lung tumors. However, treatment of lung tumor-bearing mice with Aq.Tc and AG reduced alveolar damage, evident from a decreased number of hyperchromatic irregular nuclei. A direct correlation between the concentration of tumor markers and the intensity of lung cancer was observed in tumor-bearing animals co-treated with Aq.Tc and AG. Conclusion: These findings substantiate the chemopreventive potential of Aq.Tc and AG against lung tumorigenesis. Interestingly, Aq.Tc was found to be more effective in modulating the cancer, as reflected by various observations, which may be attributed to the synergism offered by the various components of Aq.Tc. Further studies are in progress to understand the underlying mechanism by which Aq.Tc and AG inhibit lung tumorigenesis.

Keywords: Arabinogalactan, Benzo(a)pyrene B(a)P, carcinoembryonic antigen (CEA), circulating tumor DNA (ctDNA), lactate dehydrogenase (LDH), Tinospora cordifolia

Procedia PDF Downloads 169
56 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely recast in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite recent developments in effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales when analyzing visual features at different layers of the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments that quantitatively compare well-known CNN architectures with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
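The core idea - fixed random filters of several sizes, each contributing only one trainable scalar - can be sketched in a forward pass as follows. This is a simplified NumPy illustration, not the authors' implementation; the filter sizes, filter counts, and normalization are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (cross-correlation) with loops."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

class RandomKernelLayer:
    """Fixed random filters of several sizes; only one scalar weight
    per filter would be trained (illustrative sketch)."""
    def __init__(self, sizes=(3, 5, 7), filters_per_size=4):
        self.kernels = [rng.standard_normal((s, s)) / s
                        for s in sizes for _ in range(filters_per_size)]
        self.weights = np.ones(len(self.kernels))  # one scalar per filter

    def forward(self, img):
        return [w * conv2d_valid(img, k)
                for w, k in zip(self.weights, self.kernels)]

layer = RandomKernelLayer()
responses = layer.forward(rng.standard_normal((16, 16)))
```

Because the kernels are frozen, gradients are needed only for the scalar weights, which is why back-propagation cost does not grow with kernel size.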

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 266
55 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software package that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of the specific features used for general appliance modeling.
The identification process also includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For this, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
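The DTW used in the identification step can be sketched with the standard dynamic-programming recurrence over two power sequences (an illustrative implementation, not the authors' code):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic Time Warping distance between two 1-D sequences.
    D[i, j] = local cost + min of the three allowed predecessor cells."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

DTW is attractive for appliance signatures at 1/60 Hz because it aligns power profiles that differ in duration, unlike a plain Euclidean comparison.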

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 59
54 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Various calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity remains a demanding question for thermophysical researchers, and very few results have been reported for this significant property. The lack of thermal conductivity data for dense and complex liquids, at the parameters relevant to industrial developments, is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various fields of engineering and science (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed, with special emphasis on its application to transport problems of complex liquids.
This work is, to the authors' knowledge, the first to recast the heat conduction problem, which leads to polynomial velocity and temperature profiles, into an algorithm for investigating transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps are taken between 3.0×10^5/ωp and 1.5×10^5/ωp simulation time steps for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and that the minimum value λmin shifts toward higher Γ as κ increases, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ0 by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
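For reference, the Yukawa liquids studied here interact through the screened Coulomb (Yukawa) pair potential, which in reduced units takes the form φ(r) = exp(-κr)/r, with r in units of the Wigner-Seitz radius and κ the screening parameter. A minimal sketch (the value of κ is illustrative):

```python
import numpy as np

def yukawa_potential(r, kappa=2.0):
    """Screened Coulomb (Yukawa) pair potential in reduced units:
    phi(r) = exp(-kappa * r) / r, where r is the pair separation in
    units of the Wigner-Seitz radius and kappa is the screening
    parameter. kappa = 0 recovers the bare Coulomb 1/r potential."""
    r = np.asarray(r, dtype=float)
    return np.exp(-kappa * r) / r
```

Together with the Coulomb coupling parameter Γ, κ fixes the plasma state; the abstract's λmin trend is reported as a function of exactly these two parameters.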

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 256
53 Study on Preparation and Storage of Jam Incorporating Carrots (Daucus carota), Banana (Musa acuminata) and Lime (Citrus aurantifolia)

Authors: K. Premakumar, D. S. Rushani, H. N. Hettiarachchi

Abstract:

The production and consumption of preserved foods have gained much importance with globalization, and such foods provide health benefits beyond their basic nutritional functions. A study was therefore conducted to develop a jam incorporating carrot, banana, and lime. Based on the findings of several preliminary studies, five formulations of the jam were prepared by blending different percentages of carrot and banana, including a control (where only carrot was added). The freshly prepared formulations were subjected to physicochemical and sensory analysis. Physicochemical parameters such as pH, TSS, titratable acidity, ascorbic acid content, total sugar, and non-reducing sugar; organoleptic qualities such as colour, aroma, taste, spreadability, and overall acceptability; and microbial quality (total plate count) were analyzed after formulation. Physicochemical analysis of the freshly prepared carrot-banana blend jam showed increasing trends in titratable acidity (from 0.8 to 0.96, as % citric acid), ascorbic acid content (from 0.83 to 11.465 mg/100 ml), and reducing sugar (from 15.64 to 20.553%) as carrot pulp increased from 50 to 100%, while TSS (from 70.05 to 67.5 °Brix), pH, total sugar, and non-reducing sugar decreased with increasing carrot concentration. A five-point hedonic scale was used to evaluate the organoleptic characters. According to Duncan's Multiple Range Test, the mean scores for all the assessed sensory characters varied significantly (p<0.05) among the freshly made carrot-banana blend jam formulations. Based on the physicochemical and sensory analysis, the most preferred carrot:banana combinations of 50:50, 100:0 and 80:20 (T1, T2, and T5) were selected for storage studies. The formulations were stored at 30 °C (room temperature) and 70-75% RH for 12 weeks. The physicochemical characteristics were measured at two-week intervals during storage.
Decreasing trends in pH and ascorbic acid, and increasing trends in TSS, titratable acidity, total sugar, reducing sugar, and non-reducing sugar, were noted as the 12-week storage period advanced. The results of the chemical analysis showed significant differences (p<0.05) between the tested formulations. Sensory evaluation of the carrot-banana blend jams was carried out after 12 weeks by a panel of 16 semi-trained panelists. The sensory analysis showed significant differences (p<0.05) in organoleptic characters between the carrot-banana blend jam formulations. The highest overall acceptability was observed in the formulation with 80% carrot and 20% banana pulp. Microbiological analysis was carried out on the day of preparation and at 1, 2, and 3 months after preparation. No bacterial growth was observed in the freshly made carrot-banana blend jam. There were no counts of yeasts, moulds, or coliforms in any treatment after the heat treatments or during the storage period. Bacterial counts (total plate counts) were observed only after three months of storage and remained below the critical level, so all formulations were microbiologically safe for consumption. Based on the physicochemical characteristics, sensory attributes, and microbial tests, the carrot-banana blend jam with 80% carrot and 20% banana (T2) was selected as the best formulation and could be stored for up to 12 weeks without any significant changes in its quality characteristics.

Keywords: formulations, physicochemical parameters, microbiological analysis, sensory evaluation

Procedia PDF Downloads 193
52 Three-Stage Least Squares Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of any solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people's quality of life. To capture these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses; approximately 6.9 million citizens now use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. The central objective of this study is therefore to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. For statistical estimation of subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but few studies considered the endogeneity issues that can arise in subway ridership prediction models. This study focuses both on discovering the impacts of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership, ultimately contributing to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with a temporal scope of twenty-four hours divided into one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to those of a subway-only network topology. A non-recursive approach, Three-Stage Least Squares, was then applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, the network topology measures were found to have significant effects. In particular, the centrality measures showed elasticities of 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner: predicted bus ridership and predicted subway ridership were statistically significant in the OLS regression models. The three-stage least squares model therefore appears to be a plausible model for efficient subway ridership estimation. The proposed approach is expected to provide a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
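The instrumental-variable logic behind this endogeneity treatment can be illustrated with a simplified two-stage least squares sketch on synthetic data. The paper uses full 3SLS, which additionally GLS-weights the system of equations; the variable names and data-generating process below are hypothetical:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def two_sls(y, x_endog, Z_instr, X_exog):
    """Stage 1: regress the endogenous regressor (think: bus ridership)
    on instruments plus exogenous covariates. Stage 2: use its fitted
    values in the outcome (think: subway ridership) equation."""
    n = len(y)
    ones = np.ones((n, 1))
    S1 = np.hstack([ones, Z_instr, X_exog])
    fitted = S1 @ ols(S1, x_endog)
    S2 = np.hstack([ones, fitted.reshape(-1, 1), X_exog])
    return ols(S2, y)

# Synthetic check: x shares the unobserved error u with y (endogeneity),
# and the instrument z lets 2SLS recover the true coefficient 1.5.
rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=(n, 1))
w = rng.normal(size=(n, 1))
u = rng.normal(size=n)
x = 1 + 2 * z[:, 0] + u + 0.5 * rng.normal(size=n)
y = 0.5 + 1.5 * x + w[:, 0] + u
beta = two_sls(y, x, z, w)  # beta[1] estimates the effect of x on y
```

Naive OLS of y on x would be biased upward here because x and the error term share u; the instrument restores consistency, which is the same reason the paper prefers a simultaneous-equations estimator over OLS.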

Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership

Procedia PDF Downloads 158
51 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing, such as Bookmarks, History, and the Download Manager, into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines and Neural Networks. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment.
The solution includes a feature to partition the model into multiple chunks, which in turn facilitates lower memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy-efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware, integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset contains 227.5K webpages divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption compared with a standalone solution. 70% of the dataset was used for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% was achieved across the 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification in additional use cases to enhance the browsing experience.
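The core of a multinomial Naive Bayes classifier with Laplace smoothing, of the kind described, fits in a few lines of pure Python, which illustrates why its memory footprint is just per-class token counts. This is an illustrative toy, not the engine itself; the training documents and category names are invented:

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing.
    The whole model is per-class document and token counts, so it
    is cheap to store, update, and (as in the paper) partition."""
    def __init__(self):
        self.class_docs = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def fit(self, docs, labels):
        for words, label in zip(docs, labels):
            self.class_docs[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, words):
        total_docs = sum(self.class_docs.values())
        best, best_lp = None, -math.inf
        for c, n_docs in self.class_docs.items():
            lp = math.log(n_docs / total_docs)  # log prior
            total_c = sum(self.word_counts[c].values())
            for w in words:  # log likelihood with add-one smoothing
                lp += math.log((self.word_counts[c][w] + 1) /
                               (total_c + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = TinyNaiveBayes()
nb.fit([["goal", "match"], ["match", "team"],
        ["election", "vote"], ["vote", "policy"]],
       ["sports", "sports", "news", "news"])
```

In the described system, the `words` would be features extracted from the DOM tree rather than raw tokens, and the counts would be partitioned across model chunks.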

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 144
50 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended on oil, coal, and fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of a government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project that aims to assess and quantify the renewable energy potential of the country and to compile it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical resource-assessment approaches while also taking environmental, social, and accessibility aspects into account in identifying potential sites, by utilizing and integrating two different methods: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, Analytical Hierarchy Processing (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, experts from various fields were consulted: scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is utilized to derive the weights for each level from the experts' feedback.
Threshold values derived from related literature, international studies, and government laws were then reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values for each parameter, the Fuzzy Membership algorithm to normalize the Euclidean distance outputs, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are grouped so that similar values cluster together, with class boundaries placed where the differences between values are largest. Results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice the most suitable at a 75.76% suitability percentage, whereas wind has the lowest suitability percentage, with a score of 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
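The AHP step, deriving criterion weights from an expert pairwise comparison matrix, can be sketched with power iteration toward the matrix's principal eigenvector. The example criteria and judgment values below are illustrative, not the study's:

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Priority weights from an AHP pairwise comparison matrix via
    power iteration: repeatedly apply the matrix and renormalize,
    converging to the principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical 3-criterion matrix on Saaty's 1-9 scale: criterion 1
# judged 3x as important as criterion 2 and 5x as important as 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
```

In the study's workflow, weights like these feed the Weighted Overlay aggregation of the normalized GIS layers. A full AHP application would also check the consistency ratio of the matrix before accepting the weights.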

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 132
49 Dietary Intakes and Associated Demographic, Behavioural and Other Health-Related Factors in Mexican College Students

Authors: Laura E. Hall, Joel Monárrez-Espino, Luz María Tejada Tayabas

Abstract:

College students are at risk of weight gain and poor dietary habits, and health behaviours established during this period have been shown to track into midlife. They may therefore be an important target group for health promotion strategies, yet there is a lack of literature regarding dietary intakes and associated factors in this group, particularly in middle-income countries such as Mexico. The aim of this exploratory research was to describe and compare reported dietary intakes among nursing and nutrition college students at two public universities in Mexico, and to explore the relationship between demographic, behavioural, and other health-related factors and the risk of low diet quality. Mexican college students (n=444) majoring in nutrition or nursing at two urban universities completed questionnaires regarding dietary and health-related behaviours and risks. Dietary intake was assessed via 24-hour recall. Weight, height, and abdominal circumference were measured. Descriptive statistics were reported, and nutrient intakes were compared between colleges and study tracks using Student's t tests, odds ratios, and Pearson chi-square tests. Two diet quality scores were constructed, and the relationship between demographic, behavioural, and other health-related factors and these scores was explored using binary logistic regression. Analysis was performed using SPSS Statistics, with differences considered statistically significant at p<0.05. The response rate to the survey was 91%. When macronutrients were considered as a percentage of total energy, the majority of students had protein intakes within recommended ranges; however, one quarter of students had carbohydrate and fat intakes exceeding recommended levels. Three quarters had fibre intakes below recommendations. More than half of the students reported intakes of magnesium, zinc, vitamin A, folate, and vitamin E below the estimated average requirements.
Students studying nutrition reported macronutrient and micronutrient intakes that were more compliant with recommendations compared to nursing students, and students studying in central-north Mexico were more compliant than those studying in southeast Mexico. Breakfast skipping (Adjusted Odds Ratio (OR) = 5.3; 95% Confidence Interval (CI) = 1.2-22.7), risk of anxiety (OR = 2.3; CI = 1.3-4.4), and university location (OR = 1.6; CI = 1.03-2.6) were associated with a greater risk of having a low macronutrient score. Caloric intakes <1800kcal (OR = 5.8; CI = 3.5-9.7), breakfast skipping (OR = 3.7; CI = 1.4-10.3), vigorous exercise ≤1h/week (OR = 2.6; CI = 1.3-5.2), soda consumption >250mls/day (OR = 2.0; CI = 1.2-3.3), unhealthy diet perception (OR = 1.9; CI = 1.2-3.0), and university location (OR = 1.8; CI = 1.1-2.8) were significantly associated with greater odds of having a low micronutrient score. College students studying nursing and nutrition did not report ideal diets, and these students should not be overlooked in public health interventions. Differences in dietary intakes between universities and study tracks were evident, with more favourable profiles evident in nutrition compared to nursing, and North-central compared to Southeast students. Further, demographic, behavioural and other health-related factors were associated with diet quality scores, warranting further research.
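The odds ratios and 95% confidence intervals reported above can be illustrated with a minimal sketch; the 2×2 counts below are hypothetical, not the study's data:

```python
import math

# Hypothetical 2x2 table: rows = breakfast skippers / non-skippers,
# columns = low / adequate macronutrient score (illustrative counts only)
a, b = 12, 18    # exposed: low score, adequate score
c, d = 40, 320   # unexposed: low score, adequate score

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")
```

In practice, the adjusted odds ratios reported in the abstract come from a binary logistic regression, which adjusts each estimate for the other covariates; the crude calculation above only shows where an unadjusted OR and its Wald confidence interval come from.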

Keywords: college student, diet quality, nutrient intake, young adult

Procedia PDF Downloads 437
48 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have sought to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist’s workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed was placed at the boundary found by the Hough transform.
An experienced radiologist also performed the pectoral muscle segmentation manually. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle and showed that the data fell within the 95% limits of agreement, confirming the agreement of the automatic method with the manual one. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast.
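The two agreement measures used above can be sketched in a few lines; the pixel sets and areas below are hypothetical illustrations, not the study's images:

```python
import statistics

# Jaccard index: overlap between the pixel sets labeled as pectoral muscle
# by the manual and automatic segmentations (hypothetical coordinates)
manual_px = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto_px = {(0, 0), (0, 1), (1, 0)}
jaccard = len(manual_px & auto_px) / len(manual_px | auto_px)  # 3/4 = 0.75

# Bland-Altman statistics: bias and 95% limits of agreement for the
# segmented areas in mm^2 (hypothetical values)
manual_area = [1020, 980, 1100, 950, 1005]
auto_area = [1000, 990, 1080, 960, 1010]
diffs = [m - a for m, a in zip(manual_area, auto_area)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(f"Jaccard = {jaccard:.2f}, bias = {bias:.1f} mm^2, LoA = {loa}")
```

A Jaccard coefficient above 0.9, as reported in the abstract, means the two segmentations share more than 90% of their combined area.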

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 335
47 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles

Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster

Abstract:

Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards improving their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, an anionic clay (layered double hydroxide, LDH) nanoparticle and heparin (HEP) polyelectrolyte. To characterize the structure and colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, and infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large, irregularly shaped clusters formed within a short time even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules.
The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid materials formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between particles of the same sign of charge, as well as from steric repulsion due to the osmotic pressure arising from the overlap of the polyelectrolyte chains adsorbed on the surfaces. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence the LDH-SOD-HEP bionanocomposite possessed excellent activity in decomposing superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to the improved colloidal stability and good efficiency in scavenging superoxide radicals, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspension.

Keywords: clay, enzyme, polyelectrolyte, formulation

Procedia PDF Downloads 250
46 Efficacy of Sparganium stoloniferum–Derived Compound in the Treatment of Acne Vulgaris: A Pilot Study

Authors: Wanvipa Thongborisute, Punyaphat Sirithanabadeekul, Pichit Suvanprakorn, Anan Jiraviroon

Abstract:

Background: Acne vulgaris is one of the most common dermatologic problems and can have a significant psychological and physical effect on patients. The roles of Propionibacterium acnes in acne vulgaris involve the activation of the toll-like receptor 4 (TLR4) and toll-like receptor 2 (TLR2) pathways. Activation of these pathways can drive the inflammatory events of acne lesions, comedogenesis and sebaceous lipogenesis. Currently, several topical agents commonly used in treating acne vulgaris are known to act on TLRs, such as retinoic acid and adapalene, but these drugs still have some irritating effects. At present, there is an alarming increase in the rate of bacterial resistance due to the irrational use of antibiotics, both oral and topical. For this reason, acne treatments should contain bioactive molecules targeted at the site of action for the most effective therapeutic effect with the fewest side effects. Sparganium stoloniferum is a Chinese aquatic herb containing a compound called Sparstolonin B (SsnB), which has been reported to selectively block toll-like receptor 2 (TLR2)- and toll-like receptor 4 (TLR4)-mediated inflammatory signals. Therefore, a topical TLR2 and TLR4 antagonist, in the form of a Sparganium stoloniferum-derived compound containing SsnB, should help reduce inflammation of acne vulgaris lesions and provide an alternative treatment for patients with this condition. Materials and Methods: The objective of this randomized, double-blinded, split-faced, placebo-controlled trial was to study the safety and efficacy of the Sparganium stoloniferum-derived compound. Thirty-two volunteer patients with mild to moderate acne vulgaris according to the global acne grading system were included in the study. After giving informed consent, the subjects were given 2 topical treatments for acne vulgaris: a topical 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) and a placebo.
The subjects were asked to apply each treatment to either half of the face, assigned by randomization, daily in the morning and at night for 8 weeks, and to come in for weekly follow-up. At each visit, the patients underwent lesion counting, including comedones, papules, nodules, pustules, and cystic lesions. Results: Over the 8 weeks of the study, the difference in total lesion number between the placebo and treatment sides became statistically significant starting at week 4, when the 95% confidence intervals ceased to overlap and continued to diverge. The decrease in the number of total lesions between week 0 and week 8 on the placebo side was not statistically significant (p > 0.05), while the decrease on the treatment side between week 0 and week 8 was statistically significant (p < 0.001). Conclusion: The data demonstrate that the topical 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) is more effective than the topical placebo in treating acne vulgaris, showing a significant reduction in the total number of acne lesions. Therefore, this topical Sparganium stoloniferum extract could become a potential alternative treatment for acne vulgaris.

Keywords: acne vulgaris, sparganium stoloniferum, sparstolonin B, toll-like receptor 2, toll-like receptor 4

Procedia PDF Downloads 166
45 A Randomized, Controlled Trial to Test Habit Formation Theory for Low Intensity Physical Exercise Promotion in Older Adults

Authors: Patrick Louie Robles, Jerry Suls, Ciaran Friel, Mark Butler, Samantha Gordon, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low. This is despite the fact that scientific evidence finds increasing physical activity to be positively associated with health benefits. Behavior change techniques (BCTs) have demonstrated some effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a personalized trials (N-of-1) design, delivered virtually, to evaluate the efficacy of using five BCTs to increase low-intensity physical activity (by 2,000 steps of walking per day) in adults aged 45-75 years. The five BCTs described in habit formation theory are goal setting, action planning, rehearsal, rehearsal in a consistent context, and self-monitoring. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day at least five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. Participants then engaged remotely with a clinical research coordinator to establish a “walking plan” that included a time and day interval (e.g., between 7 am and 8 am, Monday-Friday), a location for the walk (e.g., a park), and how much time the plan would need to achieve a minimum of 2,000 steps over their baseline average step count (20 minutes). All elements of the walking plan were required to remain consistent throughout the study. In the 10-week intervention phase of the study, participants received all five BCTs in a single, time-sensitive text message.
The text message was delivered 30 minutes prior to the established walk time and signaled participants to begin walking when the context (i.e., day of the week, time of day) they had pre-selected was encountered. Participants were asked to log both the start and the conclusion of their activity session by pressing a button on the Fitbit tracker. Within 30 minutes of the planned conclusion of the activity session, participants received a text message with a link to a secure survey. Here, they noted whether they engaged in the BCTs when prompted and completed an automaticity survey to identify how “automatic” their walking behavior had become. At the end of their trial, participants received a personalized summary of their step data over time, helping them learn more about their responses to the five BCTs. Whether the use of these five ‘habit formation’ BCTs in combination elicits a change in physical activity behavior among older adults will be reported. This study will inform the feasibility of a virtually delivered N-of-1 study design to effectively promote physical activity as a component of healthy aging.

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 120
44 Negative Perceptions of Ageing Predicts Greater Dysfunctional Sleep Related Cognition Among Adults Aged 60+

Authors: Serena Salvi

Abstract:

Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors’ health and functioning. This set of studies has shown that a negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often proved unreliable when compared with objective sleep measures. Investigations of self-reported sleep quality among older adults have suggested that this portion of the population tends to accept disrupted sleep if it is believed to be normal for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study was to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study investigated a potential association between personal attitudes towards ageing, sleep locus of control and dysfunctional beliefs about sleep in this portion of the population. Data were statistically analysed in SPSS. Participants were recruited through the online participant recruitment system Prolific. Attention-check questions were included throughout the questionnaire, and consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396).
Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. The Pearson coefficient was used for interval variables, independent t-tests for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC) and Control and Consequences Negative (NC) proved particularly relevant to this study. Regression analyses show that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes towards sleep in this sample, after controlling for subjective sleep quality, level of depression and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of the variance in SLOC, after controlling for subjective sleep quality, level of depression and dysfunctional beliefs about sleep.

Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality

Procedia PDF Downloads 87
43 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has been the subject of few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to describe those transitions. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the shape of the incoming load and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all of its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst rises sharply enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, and outperforms the competitor in all studied metrics.
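The reactive capacity decision the abstract describes can be sketched as a toy MAPE-K loop; the class, parameter names and values below are hypothetical illustrations, not the authors' model:

```python
import math

def required_instances(incoming_rate: float, per_instance_rate: float,
                       target_saturation: float = 0.7) -> int:
    """Instances needed to keep saturation below a target (hypothetical 0.7)."""
    return max(1, math.ceil(incoming_rate / (per_instance_rate * target_saturation)))

class ReactiveScaler:
    """Toy MAPE-K loop: Monitor the rate, Analyze/Plan capacity, Execute with a cooldown."""

    def __init__(self, per_instance_rate: float, cooldown_periods: int = 3):
        self.per_instance_rate = per_instance_rate
        self.cooldown_periods = cooldown_periods
        self.instances = 1
        self._cooldown = 0

    def step(self, incoming_rate: float) -> int:
        """One sampling period: returns the instance count after the decision."""
        target = required_instances(incoming_rate, self.per_instance_rate)
        if self._cooldown > 0:
            self._cooldown -= 1          # still cooling down: hold capacity
        elif target != self.instances:
            self.instances = target      # scale up or down to the computed target
            self._cooldown = self.cooldown_periods  # suppress oscillation
        return self.instances

scaler = ReactiveScaler(per_instance_rate=10.0)
for rate in [5, 40, 60, 60, 20, 20, 20]:
    print(rate, scaler.step(rate))
```

The cooldown period is what distinguishes this from a purely memoryless threshold rule: after a scaling action, the loop holds capacity for a few sampling periods, trading reaction speed for stability, which is exactly the tension the burst and drop-then-burst scenarios probe.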

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 76
42 Investigating the Association between Escherichia Coli Infection and Breast Cancer Incidence: A Retrospective Analysis and Literature Review

Authors: Nadia Obaed, Lexi Frankel, Amalia Ardeljan, Denis Nigel, Anniki Witter, Omar Rashid

Abstract:

Breast cancer is the most common cancer among women, with a lifetime risk of one in eight for all women in the United States. Although breast cancer is prevalent throughout the world, the uneven distribution of incidence and mortality rates is shaped by variation in population structure, environment, genetics and known lifestyle risk factors. Furthermore, the bacterial profile of healthy and cancerous breast tissue differs, with a higher relative abundance of bacteria capable of causing DNA damage in breast cancer patients. Previous bacterial infections may change the composition of the microbiome and partially account for the environmental factors promoting breast cancer. One study found that higher amounts of Staphylococcus, Bacillus, and Enterobacteriaceae, of which Escherichia coli (E. coli) is a member, were present in breast tumor tissue. Based on E. coli’s ability to damage DNA, it was hypothesized that there is an increased risk of breast cancer associated with previous E. coli infection. Therefore, the purpose of this study was to evaluate the correlation between E. coli infection and the incidence of breast cancer. Holy Cross Health, Fort Lauderdale, provided access to a Health Insurance Portability and Accountability Act (HIPAA)-compliant national database for the purpose of academic research. International Classification of Diseases, 9th and 10th revision codes (ICD-9, ICD-10) were then used to conduct a retrospective analysis using data from January 2010 to December 2019. All breast cancer diagnoses and all patients infected versus not infected with E. coli who underwent typical E. coli treatment were investigated. The obtained data were matched for age, Charlson Comorbidity Index (CCI) score, and antibiotic treatment. Standard statistical methods were applied to determine statistical significance, and an odds ratio was used to estimate the relative risk.
A total of 81,286 patients were identified and analyzed from the initial query, then reduced to 31,894 antibiotic-specific treated patients in each of the infected and control groups. The incidence of breast cancer was 2.51% (2,043 patients) in the E. coli group compared to 5.996% (4,874 patients) in the control group. The incidence of breast cancer was 3.84% (1,223 patients) in the treated E. coli group compared to 6.38% (2,034 patients) in the treated control group. The decreased incidence of breast cancer in the E. coli and treated E. coli groups was statistically significant, with p-values of 2.2×10⁻¹⁶ and 2.264×10⁻¹⁶, respectively. The odds ratios in the E. coli and treated E. coli groups were 0.784 (95% CI: 0.756-0.813) and 0.787 (95% CI: 0.743-0.833), respectively. The current study shows a statistically significant decrease in breast cancer incidence in association with previous Escherichia coli infection. Researching the relationship with single bacterial species is important, as only up to 10% of breast cancer risk is attributable to genetics, while the contribution of environmental factors, including previous infections, potentially accounts for the majority of the preventable risk. Further evaluation is recommended to assess the potential and mechanism of E. coli in decreasing the risk of breast cancer.

Keywords: breast cancer, escherichia coli, incidence, infection, microbiome, risk

Procedia PDF Downloads 237
41 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker

Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya

Abstract:

The number of people receiving in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). Reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise to address this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos that provide important insights and reaffirm the promise of such behaviors for developing non-invasive methodologies. Methodology—A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment at Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was performed by a third-party testing laboratory on day 5-7 embryo biopsies.
The morphokinetic variables of each embryo were measured with the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the times to: pronuclei fading (tPNf), division to 2, 3, …, 9 cells (t2, t3, …, t9), start of embryo compaction (tSC), morula formation (tM), start of blastocyst formation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 − t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major Findings—The average age of the patients was 35.04 years. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 − t4 (P = 0.0298). Our results indicated that this interval on average lasts significantly longer for aneuploid embryos: c45(aneuploid) = 11.93 h vs. c45(euploid) = 7.97 h. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same result (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable differed between the two groups. Conclusion—Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and incorporate embryo quality in future studies.
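The interval variables in the abstract, such as c45 = t5 − t4, are simple differences between annotated event times; a minimal sketch with hypothetical annotations (hours post-insemination, not study data):

```python
# Hypothetical morphokinetic annotations in hours post-insemination
times = {"t2": 26.1, "t3": 37.5, "t4": 39.0, "t5": 49.2, "tSC": 85.0}

def interval(times: dict, start: str, end: str) -> float:
    """Interval between two morphokinetic events, e.g. c45 = t5 - t4."""
    return times[end] - times[start]

c23 = interval(times, "t2", "t3")  # 11.4 h
c45 = interval(times, "t4", "t5")  # 10.2 h
print(f"c23 = {c23:.1f} h, c45 = {c45:.1f} h")
```

The study's actual comparison of such intervals between euploid and aneuploid groups used a mixed regression model, since several embryos can come from the same patient and are therefore not independent observations.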

Keywords: IVF, embryo, euploidy, aneuploidy, morphokinteic

Procedia PDF Downloads 78
40 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam

Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le

Abstract:

Introduction: In Vietnam, current monitoring and treatment for the typical diabetic patient is mostly based on glucose monitoring with an HbA1c test every three months (the recommended goal is HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both mother and child. Moreover, GDM requires continuous glucose monitoring at least every two weeks, and an alternative marker of short-term glycemic control is therefore considered a potentially valuable tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin for GDM monitoring. Based on actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2) and glycosylated hemoglobin (HbA1c). Methods: A cohort study of pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray Hospital, Vietnam, from June 2014 to March 2015. Cho Ray Hospital is the referral destination for GDM patients in southern Vietnam; the study population comes from many other provinces, and the researchers therefore believe that this demographic characteristic helps the study result reflect the whole area. In this study, diabetic patients received a continuous glucose monitoring regimen consisting of on-site visits every 2 weeks with glycosylated serum protein, fasting blood glucose and 2-hour postprandial glucose tests; an HbA1c test every 3 months; and nutritional counseling for a daily diet program. The subjects also received routine treatment at the hospital, with tight follow-up from their healthcare providers.
Researchers recorded bi-weekly health conditions, serum fructosamine levels, and delivery outcomes for the pregnant women, using Stata 13 for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. Serum fructosamine level showed a weak correlation with G0 (r = 0.3458, p < 0.001) and HbA1c (r = 0.3544, p < 0.001), and a moderate correlation with G2 (r = 0.4379, p < 0.001). During the study period, delivery outcomes were recorded for 287 women, with a mean gestational age at delivery of 38.5 ± 1.5 weeks; 9% had macrosomia, 2.8% had premature birth before week 35, and 9.8% had premature birth before week 37; 64.8% delivered by cesarean section, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with positive delivery outcomes (0% perinatal or neonatal mortality). The reference value for serum fructosamine measurement offers a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray Hospital are considering further studies to test this reference as a target value in their GDM treatment and monitoring.
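The correlation analysis reported above (Pearson r between fructosamine and G0, G2, and HbA1c) can be sketched as follows. This is an illustrative example only: the data are synthetic, generated to mimic the reported reference interval (112.9 ± 20.7 μmol/dL) and a weak correlation of roughly r ≈ 0.35; no real study data are used.

```python
import random
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration: 500 fructosamine readings around the reported
# reference interval, and fasting glucose (G0) values that co-vary weakly.
random.seed(0)
fructosamine = [random.gauss(112.9, 20.7) for _ in range(500)]
g0 = [0.01 * f + random.gauss(5.0, 0.6) for f in fructosamine]
r = pearson_r(fructosamine, g0)
```

In practice, an analysis package such as Stata (as used in the study) or `scipy.stats.pearsonr` would also report the p-value alongside r.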

Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam

Procedia PDF Downloads 266
39 Blood Chemo-Profiling in Workers Exposed to Occupational Pyrethroid Pesticides to Identify Associated Diseases

Authors: O. O. Sufyani, M. E. Oraiby, S. A. Qumaiy, A. I. Alaamri, Z. M. Eisa, A. M. Hakami, M. A. Attafi, O. M. Alhassan, W. M. Elsideeg, E. M. Noureldin, Y. A. Hobani, Y. Q. Majrabi, I. A. Khardali, A. B. Maashi, A. A. Al Mane, A. H. Hakami, I. M. Alkhyat, A. A. Sahly, I. M. Attafi

Abstract:

According to the Food and Agriculture Organization (FAO) Pesticides Use Database, pesticide use in agriculture in Saudi Arabia more than doubled from 4539 tons in 2009 to 10496 tons in 2019. Among pesticides, pyrethroids are commonly used in Saudi Arabia. Pesticides may increase susceptibility to a variety of diseases, particularly among pesticide workers, owing to their extensive, indiscriminate, and long-term use. Therefore, analyzing blood chemo-profiles and evaluating the detected substances as biomarkers of pyrethroid pesticide exposure may help identify and predict adverse effects of exposure, for both preventive and risk-assessment purposes. The purpose of this study was to (a) analyze chemo-profiles by Gas Chromatography-Mass Spectrometry (GC-MS), (b) identify the chemicals most commonly detected in an exposure-time-dependent manner using a Venn diagram, and (c) identify the diseases associated with them among pesticide workers using analyzer tools on the Comparative Toxicogenomics Database (CTD) website. A total of 250 healthy male volunteers (20-60 years old) who handle pesticides in the Jazan region of Saudi Arabia (exposure intervals: 1-2, 4-6, 6-8, and more than 8 years) were included in the study. A questionnaire was used to collect demographic information, the duration of pesticide exposure, and the existence of chronic conditions. Blood samples were collected for biochemistry analysis and extracted by solid-phase extraction for GC-MS analysis. Biochemistry analysis revealed no significant changes with exposure period; however, an inverse association between albumin level and exposure interval was observed. The blood chemo-profiles were differentially expressed in an exposure-time-dependent manner. This analysis identified the common chemical set associated with each group and their significantly associated occupational diseases. 
While some of these chemicals are associated with a variety of diseases, the distinguishing feature of these chemically associated disorders is their amenability to preventive measures. The most interesting finding was the identification of several chemicals (erucic acid, pelargonic acid, alpha-linolenic acid, dibutyl phthalate, diisobutyl phthalate, dodecanol, myristic acid, pyrene, and 8,11,14-eicosatrienoic acid) associated with pneumoconiosis, asbestosis, asthma, silicosis, and berylliosis. The chemical-disease association study also found that cancer, digestive system disease, nervous system disease, and metabolic disease were the disease categories most often linked to the common chemical set. A hierarchical clustering approach was used to compare the expression patterns and exposure intervals of the commonly detected chemicals. Further study is needed to validate these chemicals as early markers of pyrethroid-related occupational disease, which could help assess and reduce risk. The current study contributes valuable data and recommendations to public health.
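The Venn-diagram step described above, finding the chemicals common to all exposure-interval groups, reduces to a set intersection. The sketch below uses chemical names from the abstract, but the assignment of chemicals to exposure groups is invented for illustration; the actual group memberships come from the GC-MS detections.

```python
# Hypothetical detections per exposure-interval group (group membership
# is illustrative, not from the study data).
groups = {
    "1-2 yr": {"erucic acid", "pelargonic acid", "dodecanol", "pyrene"},
    "4-6 yr": {"erucic acid", "dibutyl phthalate", "dodecanol", "pyrene"},
    "6-8 yr": {"erucic acid", "myristic acid", "dodecanol", "pyrene"},
    ">8 yr":  {"erucic acid", "alpha-linolenic acid", "dodecanol", "pyrene"},
}

# The Venn-diagram "core": chemicals detected in every exposure group.
common = set.intersection(*groups.values())
```

This common chemical set is what would then be queried against a chemical-disease association resource such as CTD.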

Keywords: occupational, toxicology, chemo-profiling, pesticide, pyrethroid, GC-MS

Procedia PDF Downloads 83
38 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or ground-approaching airplane wings at high incidence, is one of the major criteria governing their design. Such structures may exhibit a significant dynamic response, requiring small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected with the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, since the vorticity is strongly localized; implicit handling of the free-space boundary conditions typical of this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, any solution method must balance computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings. 
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. This cost can be reduced either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction to the particle-particle interaction in regions of interest. This paper presents different strategies for extending the conventional VPM to reduce the computational cost while still resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, and dynamic re-discretization of the particle map, to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the resulting improvements in the efficiency and accuracy of the proposed extensions are presented, together with their relevant applications.
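The O(Np²) cost mentioned above comes from the direct particle-particle velocity evaluation: every vortex particle induces a velocity on every other one. A minimal 2-D sketch of this interaction, using a regularized point-vortex (Biot-Savart) kernel, is shown below. The smoothing radius `eps` and the particle layout are illustrative assumptions, not the paper's actual discretization.

```python
import math

def induced_velocity(particles, eps=0.05):
    """Direct O(Np^2) evaluation of the 2-D velocity that vortex particles
    induce on each other. Each particle is (x, y, gamma), with gamma its
    circulation; eps is a smoothing radius that regularizes the singular
    point-vortex kernel (a common choice in vortex particle codes)."""
    vels = []
    for i, (xi, yi, _) in enumerate(particles):
        ux = uy = 0.0
        for j, (xj, yj, gj) in enumerate(particles):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy + eps * eps
            # 2-D point-vortex kernel: u = gamma/(2*pi) * (-dy, dx) / r^2
            ux += -gj * dy / (2.0 * math.pi * r2)
            uy += gj * dx / (2.0 * math.pi * r2)
        vels.append((ux, uy))
    return vels

# Two co-rotating vortices of equal circulation induce equal and
# opposite velocities on each other.
v = induced_velocity([(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)])
```

The adaptation strategies in the paper target exactly this loop: substepping refines the time integration of selected particles, and re-discretization controls how many particles enter the double sum.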

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 243