Search results for: negative binomial model
351 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). 
The probability of hospitalization, ED visit, and ambulatory care was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without these comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for.
In a specific referent case, hypertension was the costliest condition (44% had it, with an additional annual cost of $715 [$678/$753]). If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
Procedia PDF Downloads 149
350 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data
Authors: Andrea Ghermandi
Abstract:
Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determine their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits in natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use as obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. Such boundaries are used as input for the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users in each site. The adequateness and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics – such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE) – are concerned. 
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home location of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. Such information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach compatible with the need to protect public health.
Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds
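The calibration step the abstract describes (relating observed visitors to photo-user-days and checking predictive power) can be sketched as follows. The SMA slope is the sign of the correlation times the ratio of the standard deviations, which is why it differs from OLS. The data below are simulated, not the study's 62-site sample.

```python
# Hedged sketch of Standardized Major Axis (SMA) regression, the calibration
# model the abstract found to outperform OLS; data are illustrative.
import numpy as np

def sma_fit(x, y):
    """Return (slope, intercept) of the SMA line y = a + b*x."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

rng = np.random.default_rng(1)
photo_user_days = rng.lognormal(3, 1, 62)                 # crowdsourced proxy
visitors = 40 * photo_user_days * rng.lognormal(0, 0.3, 62)  # observed use

b, a = sma_fit(np.log(photo_user_days), np.log(visitors))
pred = a + b * np.log(photo_user_days)
# One of the verification statistics named in the abstract (RMSEP):
rmsep = np.sqrt(np.mean((np.log(visitors) - pred) ** 2))
```

On log-log axes the simulated true slope is 1, and the SMA estimate lands close to it; RMSEP here plays the role of the prediction-error check described in the abstract.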
Procedia PDF Downloads 180
349 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil
Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss
Abstract:
The brewing industry with artisanal concepts seeks to serve a specific market with diversified production that has been gaining ground nationally, including in the Amazon region. This growth is due to more demanding consumers with diversified tastes who want to try new types of beer, enjoying products with new aromas and flavors as a differential from what is widely spread by the big industrial brands. Thus, through qualitative research methods, the study aimed to investigate the management of the production process of a craft brewery in a city in the northern State of Mato Grosso (Brazil), providing knowledge of production processes and strategies in the industry. With the efficient use of resources, it is possible to obtain the necessary quality and provide better performance and differentiation of the company, besides analyzing the best management model. The research is descriptive with a qualitative approach through a case study. For data collection, a semi-structured interview was elaborated, covering three areas: microbrewery characterization, the artisanal beer production process, and the company's supply chain management. Production processes were also observed during technical visits. The study verified that the artisanal brewery researched develops preventive maintenance strategies for its inputs, machines, and equipment so that product quality and the production process are maintained. It was observed that the distance from supply centers requires process and supply chain management to be carried out with a longer planning horizon so that delivery of the final product is satisfactory. The production process of the brewery is composed of machines and equipment that allow control and quality of the product, and the manager states that the available equipment meets demand given the productive capacity of the industry and its consumer market.
This study also highlights one of the challenges for the development of small breweries facing the market giants: legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like major producers, which have advantages in purchasing large batches of raw materials and receive tax incentives because they are large employers and tax contributors. It was possible to observe that the supply chain management system relies on spreadsheets and notes that are done manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents generated by the industry is outsourced and meets requirements. Finally, the results showed that the industry uses preventive maintenance as a production strategy, which allows better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer and is pursued throughout the production process, with the selection of better inputs, the effectiveness of the production processes, and the relationship with commercial partners.
Keywords: artisanal brewery, production management, production processes, supply chain
Procedia PDF Downloads 120
348 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions
Authors: Guo Bingkun
Abstract:
With the rapid development of China's social economy and the continuous improvement of urbanization level, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The demand for urban residents' housing has been greatly released in the past decade. The demand for housing and related construction land required for urban development has brought huge pressure to urban operations, and land prices have also risen rapidly in the short term. On the other hand, comparing the eastern and western regions of China, there are also great differences in urban socioeconomics and land prices across the eastern, central and western regions. Judging from the current overall market development, after more than ten years of housing market reform and development, the quality of housing and land use efficiency in Chinese cities have been greatly improved. However, the contradiction between land demand for urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central and western cities, and identifies the main factors that determine the level of urban residential land prices. This paper provides guidance for urban managers in formulating land policies and alleviating the contradiction between land supply and demand, offers ideas for improving urban planning, and promotes better urban management. The research in this paper focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, parcel land prices, etc.
However, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad, the paper considers both land supply and demand and, based on basic theoretical analysis, determines factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land returns, selecting representative indicators for each. Second, using conventional econometric analysis methods, we established a model of the factors affecting urban residential land prices, quantitatively analyzed the relationship between influencing factors and residential land prices and its intensity, and compared the differences and similarities in these effects between the eastern, central and western regions. Research results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. The main reasons for the difference in residential land prices between the eastern, central and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity and real estate development investment.
Keywords: urban housing, urban planning, housing prices, comparative study
Procedia PDF Downloads 50
347 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that will perform efficiently in such complex systems. A sliding mode control algorithm is adopted in the present study because, compared with the other algorithms mentioned, it offers inherent stability and distinguished robustness to system parameter variation, imprecision, and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control that can effectively reduce the responses of the bridge under real earthquake ground motions.
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records, chosen so that all possible characteristic variations are accommodated. Of these fourteen earthquakes, seven are near-field and seven are far-field; they are further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
Keywords: bridge, semi active control, sliding mode control, MR damper
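The clipped optimal voltage law paired with the sliding mode controller has a compact form: maximum voltage is commanded only when the measured damper force is smaller than, and acting in the same direction as, the desired control force. A minimal sketch, with illustrative force and voltage values (the actual damper limits and units are not given in the abstract):

```python
# Hedged sketch of the clipped-optimal command law for an MR damper.
import numpy as np

def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal command voltage for an MR damper.

    Command v_max when the measured damper force must grow toward the
    desired control force (same sign, smaller magnitude); otherwise 0.
    """
    return v_max * np.heaviside((f_desired - f_measured) * f_measured, 0.0)

# Illustrative use: desired force 10 (arb. units), damper producing 5 -> v_max
v = clipped_optimal_voltage(10.0, 5.0, v_max=9.0)
```

In a full simulation, `f_desired` would come from the sliding mode system controller at each time step, and the voltage output would drive a phenomenological MR damper model to produce `f_measured`.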
Procedia PDF Downloads 124
346 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping
Authors: Emily Rowe
Abstract:
Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measure foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants who each attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and the combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, results suggest the pressure mat is not a valid measure of this parameter since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure for anteroposterior and mediolateral COM velocities (AP velocity: p=0.000, ML velocity target 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development, the COM velocity measure validity could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement or implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables
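The reliability statistic reported here, an ICC from a mean-rating (k = 2), absolute-agreement, two-way model, can be sketched directly from its mean-squares definition (ICC(A,k) in McGraw and Wong's notation). The session scores below are made up for illustration; they are not the study's data.

```python
# Hedged sketch of ICC(A,k): two-way model, absolute agreement, mean of k raters.
import numpy as np

def icc_a_k(scores):
    """ICC(A,k) for an (n_subjects, k_sessions) score array."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-session means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-sessions MS
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual MS
    return (msr - mse) / (msr + (msc - mse) / n)

# Illustrative test-retest data: 4 subjects, 2 sessions
sessions = np.array([[1.0, 1.2], [2.0, 2.1], [3.0, 2.8], [4.0, 4.1]])
icc = icc_a_k(sessions)
```

Because the model penalizes systematic session offsets (absolute agreement rather than consistency), a constant shift between sessions lowers the ICC even when subject rankings are preserved.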
Procedia PDF Downloads 153
345 Effectiveness of Imagery Compared with Exercise Training on Hip Abductor Strength and EMG Production in Healthy Adults
Authors: Majid Manawer Alenezi, Gavin Lawrence, Hans-Peter Kubis
Abstract:
Imagery training could be an important treatment for improving muscle function in patients whose exercise training is limited by pain or other adverse symptoms. However, recent studies are mostly limited to small muscle groups and are often contradictory. Moreover, a possible bilateral transfer effect of imagery training has not been examined. We, therefore, investigated the effectiveness of unilateral imagery training in comparison with exercise training on hip abductor muscle strength and EMG. Additionally, both limbs were assessed to investigate bilateral transfer effects. Healthy individuals took part in an imagery or exercise training intervention for two weeks and were assessed pre and post training. Participants (n=30), after randomization into an imagery and an exercise group, trained 5 times a week under supervision, with additional self-performed training on the weekends. The training consisted of performing, or imagining, 5 maximal isometric hip abductor contractions (= one set), repeating the set 7 times. All measurements and training were performed lying on the side on a dynamometer table. The imagery script combined kinesthetic and visual imagery with an internal perspective for producing imagined maximal hip abduction contractions. The exercise group performed the same number of tasks but executed the maximal hip abductor contractions. Maximal hip abduction strength and EMG amplitudes of the right and left limbs were measured pre- and post-training. Additionally, handgrip strength and right shoulder abduction (strength and EMG) were measured. Using mixed model ANOVA (strength measures) and Wilcoxon tests (EMGs), data revealed a significant increase in hip abductor strength in the imagery group on the trained right limb (~6%); this was not observed in the exercise group.
Additionally, the left hip abduction strength (not used for training) did not show a main effect of time; however, there was a significant interaction of group and time revealing that strength increased in the imagery group while it remained constant in the exercise group. EMG recordings supported the strength findings, showing significant elevation of EMG amplitudes after imagery training on both the right and left sides, while the exercise training group did not show any changes. Moreover, measures of handgrip strength and shoulder abduction showed no effects over time and no interactions in either group. The experiments showed that imagery training is a suitable method for effectively increasing functional parameters of larger limb muscles (strength and EMG), which were enhanced on both sides (trained and untrained), confirming a bilateral transfer effect. Exercise training did not produce any increases in the parameters above, indicating no functional improvements; the healthy individuals tested might not easily achieve benefits from exercise training within the time tested. It is evident that imagery training is effective in increasing the central motor command towards the muscles and that the effect seems to be segmental (no increase in handgrip strength and shoulder abduction parameters) and affects both sides (trained and untrained). In conclusion, imagery training was effective in producing functional improvements in limb muscles and produced a bilateral transfer in strength and EMG measures.
Keywords: imagery, exercise, physiotherapy, motor imagery
Procedia PDF Downloads 234
344 The Efficacy of Government Strategies to Control COVID 19: Evidence from 22 High Covid Fatality Rated Countries
Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman
Abstract:
The COVID-19 pandemic has created unprecedented challenges to both the health and economic states of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 and to propose policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. This study contributes to filling the knowledge gap by exploring the long-term efficacy of governments' extensive plans. This study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and rates of spread of COVID-19. Mortality rate and rate of infection data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes used to investigate the efficacy of government interventions to control COVID-19: the overall government response index, stringency index, containment and health index, and economic support index. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the ARDL procedure, the study employs (i) unit root tests to check stationarity, (ii) panel cointegration tests, and (iii) PMG and ARDL estimation techniques. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate the risks of COVID-19.
Of the four types of government policy interventions, (i) stringency and (ii) economic support have been the most effective: stringency and financial measures resulted in reductions in infection and fatality rates. (iii) The overall government response is positively associated with deaths but negatively with infected cases. Although this positive relationship is somewhat unexpected in the long run, public breaches of government social distancing norms in some countries and population age demographics are possible reasons for this result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect is smaller (in absolute value). The model implies that implementing containment health practices without tracing and individual-level quarantine does not work well. The policy implication is that containment health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been key to suppressing COVID-19 infection and fatality rates.
Keywords: COVID-19, infection rate, deaths rate, government response, panel data
Procedia PDF Downloads 76
343 In vitro Evaluation of Immunogenic Properties of Oral Application of Rabies Virus Surface Glycoprotein Antigen Conjugated to Beta-Glucan Nanoparticles in a Mouse Model
Authors: Narges Bahmanyar, Masoud Ghorbani
Abstract:
Rabies is caused by several species of the genus Lyssavirus in the family Rhabdoviridae. The disease is a deadly encephalitis transmitted from warm-blooded animals to humans, and domestic and wild carnivores play the most crucial role in its transmission. The prevalence of rabies in poor areas of developing countries constantly poses a global threat to public health. According to the World Health Organization, approximately 60,000 people die yearly from rabies; of these, 60% of deaths are related to the Middle East. Although rabies encephalitis is incurable to date, awareness of the disease and the use of vaccines are the best ways to combat it. Although effective vaccines are available, vaccine production and management to combat rabies involve high costs. The increasing prevalence and discovery of new strains of rabies virus create a need for vaccines that are safe, effective, and as inexpensive as possible. One approach considered to achieve this quality and quantity is the manufacture of recombinant rabies vaccines. Currently, livestock rabies vaccines are available only as inactivated or live attenuated preparations, and the inactivation process requires careful attention. The rabies virus contains a negative-sense single-stranded RNA genome that encodes the five major structural genes (N, P, M, G, L) from 3' to 5'. Rabies virus glycoprotein G, the major antigen, can induce virus-neutralizing antibody. The N antigen is another candidate for developing recombinant vaccines; however, because it lies within the RNP complex of the virus, the possibility of genetic diversity across different geographical locations is very high. Glycoprotein G is more conserved, structurally and antigenically, than other genes: conservation at the nucleotide sequence level is about 90%, and at the amino acid level, 96%.
Recombinant subunit vaccines contain fragments of the pathogen's protein or polysaccharide that have been carefully studied to determine which of these molecules elicits a stronger and more effective immune response. These vaccines minimize the risk of side effects by limiting the immune system's exposure to the pathogen. Such vaccines are relatively inexpensive, easy to produce, and more stable than vaccines containing whole viruses or bacteria. The problem with these vaccines is that the pathogenic subunits may elicit a weak immune response or may be degraded before they reach the immune cells; nanoparticle carriers are needed to overcome this, and biodegradable nanoparticles with functionalized surfaces are good candidates for use as vaccine adjuvants. In this study, we intend to use beta-glucan nanoparticles as the adjuvant. The surface glycoprotein of the rabies virus (G) is responsible for recognition and binding of the virus to the target cell. This glycoprotein is the major protein in the structure of the virus and induces an antibody response in the host. In this study, we intend to use rabies virus surface glycoprotein conjugated with beta-glucan nanoparticles to produce vaccines.
Keywords: rabies, vaccines, beta glucan, nanoparticles, adjuvant, recombinant protein
Procedia PDF Downloads 173
342 Insulin Resistance in Early Postmenopausal Women Can Be Attenuated by Regular Practice of 12 Weeks of Yoga Therapy
Authors: Praveena Sinha
Abstract:
Context: Diabetes is a global public health burden, particularly affecting postmenopausal women. Insulin resistance (IR) is prevalent in this population and is associated with an increased risk of developing type 2 diabetes. Yoga therapy is gaining attention as a complementary intervention for diabetes due to its potential to address stress psychophysiology. This study focuses on the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women. Research Aim: The aim of this research is to investigate the effect of a 3-month-long yoga practice on insulin resistance in early postmenopausal women. Methodology: The study used a prospective longitudinal design with 67 women within five years of menopause. Participants were divided into two groups based on their willingness to join yoga. The Yoga group (n = 37) received routine gynecological management along with an integrated yoga module, while the Non-Yoga group (n = 30) received only routine management. Insulin resistance was measured using the homeostasis model assessment of insulin resistance (HOMA-IR) method before and after the intervention. Statistical analysis was performed using GraphPad Prism Version 5 software, with statistical significance set at P < 0.05. Findings: The results indicate a decrease in serum fasting insulin levels and HOMA-IR measurements in the Yoga group, although the decrease did not reach statistical significance. In contrast, the Non-Yoga group showed a significant rise in serum fasting insulin levels and HOMA-IR measurements after 3 months, suggesting a detrimental effect on insulin resistance in these postmenopausal women. Theoretical Importance: This study provides evidence that a 12-week yoga practice can attenuate the increase in insulin resistance in early postmenopausal women.
It highlights the potential of yoga as a preventive measure against the early onset of insulin resistance and the development of type 2 diabetes mellitus. Regular yoga practice can be a valuable tool in addressing hormonal imbalances associated with early postmenopause, leading to a decrease in morbidity and mortality related to insulin resistance and type 2 diabetes mellitus in this population. Data Collection and Analysis Procedures: Data collection involved measuring serum fasting insulin levels and calculating HOMA-IR. Statistical analysis was performed using GraphPad Prism Version 5 software, and mean values with standard error of the mean were reported. The significance level was set at P < 0.05. Question Addressed: The study aimed to address whether a 3-month-long yoga practice could attenuate insulin resistance in early postmenopausal women. Conclusion: The research findings support the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women. Regular yoga practice has the potential to prevent the early onset of insulin resistance and the development of type 2 diabetes mellitus in this population. By addressing the hormonal imbalances associated with early postmenopause, yoga could significantly decrease morbidity and mortality related to insulin resistance and type 2 diabetes mellitus in these subjects.
Keywords: postmenopause, insulin resistance, HOMA-IR, yoga, type 2 diabetes mellitus
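The HOMA-IR used in this abstract is conventionally computed from fasting glucose and fasting insulin; the abstract does not restate the formula, so the following is a minimal sketch of the standard calculation with invented example values:

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formula: (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5.
    """
    return (fasting_glucose_mmol_l * fasting_insulin_uU_ml) / 22.5

# Hypothetical subject: glucose 5.0 mmol/L, insulin 10 uU/mL
print(round(homa_ir(5.0, 10.0), 2))  # → 2.22
```

A pre/post comparison, as in the study, would simply compute this index for each participant before and after the 12-week intervention.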
341 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is an energy-consumption management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been applied to low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector for the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as those based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
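The DTW matching mentioned in this abstract can be sketched with a textbook dynamic-programming implementation; the appliance templates and observed power segment below are invented for illustration and are not the authors' data:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest warping path (match, insertion, or deletion)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Match an observed power segment (watts, 1-min samples) against two templates
fridge = [0, 120, 120, 120, 0]
kettle = [0, 2000, 2000, 0, 0]
observed = [0, 115, 125, 118, 0]
scores = {"fridge": dtw_distance(observed, fridge),
          "kettle": dtw_distance(observed, kettle)}
print(min(scores, key=scores.get))  # → fridge
```

In a NILM pipeline, the template with the smallest DTW distance to a detected event segment would be the candidate appliance.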
340 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices
Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Obesity in childhood lays the groundwork for adulthood obesity. Morbid obesity in particular is an important problem for children because of associated diseases such as diabetes mellitus, cancer and cardiovascular disease. In this study, body mass index (BMI), body fat ratios, and anthropometric measurements and ratios were evaluated together with different laboratory indices in the assessment of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents, and the study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included in the study. WHO BMI-for-age-and-sex percentiles were used; children above the 99th percentile were classified as morbidly obese. Anthropometric measurements of the children were recorded after their physical examination. Bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II as well as insulin sensitivity indices (ISIs) were calculated. Girls as well as boys were binary grouped according to a homeostasis model assessment-insulin resistance (HOMA-IR) index of <2.5 and >2.5, a fasting glucose to insulin ratio (FGIR) of <6 and >6, and a quantitative insulin sensitivity check index (QUICKI) of <0.33 and >0.33 as the frequently used cut-off points. They were evaluated based upon their BMIs; arm, leg, trunk, and whole body fat percentages; body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR) and whole body fat ratio (WBFR); and anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. The SPSS/PASW 18 program was used for statistical analyses. p≤0.05 was accepted as the level of statistical significance.
All of the fat percentages differed between the groups below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. In boys, differences were observed only in arm fat percentage for HOMA-IR and leg fat percentage for QUICKI (p≤0.05). FGIR was unable to detect any differences in the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended for use with all ISIs (p≤0.001 for both girls and boys in HOMA-IR; p≤0.001 for girls and p≤0.05 for boys in FGIR and QUICKI). Indices recommended for use in both genders were Index-I, Index-II, HOMA/BMI and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). Notably, HOMA/BMI and log HOMA retained this level of significance when evaluated with the other indices, FGIR and QUICKI (p≤0.001). These parameters, along with Index-I, were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders; this study has highlighted their limitations in boys. This is particularly important for the selection of ratios and/or indices in clinical studies. Gender difference should be taken into consideration when evaluating the ratios or indices recommended for use within the scope of obesity studies.
Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index
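The binary grouping by the cut-offs above relies on the standard definitions of these insulin sensitivity indices; the abstract does not restate them, so the sketch below uses the commonly cited formulas with hypothetical glucose/insulin values:

```python
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR with glucose in mg/dL: (G0 x I0) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def fgir(glucose_mg_dl, insulin_uU_ml):
    """Fasting glucose-to-insulin ratio."""
    return glucose_mg_dl / insulin_uU_ml

def quicki(glucose_mg_dl, insulin_uU_ml):
    """Quantitative insulin sensitivity check index: 1 / (log10 I0 + log10 G0)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Binary grouping with the cut-off points quoted in the abstract
g, i = 90.0, 15.0  # hypothetical fasting glucose (mg/dL) and insulin (uU/mL)
flags = {
    "HOMA-IR > 2.5": homa_ir(g, i) > 2.5,
    "FGIR < 6": fgir(g, i) < 6,
    "QUICKI < 0.33": quicki(g, i) < 0.33,
}
print(flags)
```

Note that the three indices need not agree on the same subject, which is one reason the study compares them side by side.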
339 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 level to the 120 level. His policies, known as “Abenomics,” aim to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of stock markets between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted with a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by dynamic programming. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data for six stock indexes from the Taiwan Economic Journal.
The data is sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; the skewed-t distribution and SJC copula are therefore appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of “Abenomics” on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, China and Taiwan show stronger co-movement than Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
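The final rank-correlation step can be illustrated with a direct pairwise implementation of Kendall's τ; the five-day return series below are fabricated toy data, not the study's series:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs (no tie handling)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Toy daily returns: Japan vs. a hypothetical strongly co-moving market
japan = [0.011, -0.004, 0.007, -0.012, 0.003]
korea = [0.009, -0.002, 0.005, -0.010, 0.001]
print(kendall_tau(japan, korea))  # → 1.0 (the toy series are perfectly concordant)
```

In the study, τ is computed on the copula-aligned return sequences rather than on raw returns, but the coefficient itself is defined exactly as above.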
338 Revolutionizing Product Packaging: The Impact of Transparent Graded Lanes on Ketchup and Edible Oils Containers on Consumer Behavior
Authors: Saeid Asghari
Abstract:
The growing interest in sustainability and healthy lifestyles has stimulated the development of solutions that promote mindful consumption and healthier choices. One such solution is the use of transparent graded lanes in product packaging, which enables consumers to visually track their product consumption and encourages portion control. However, the extent to which this packaging affects consumer behavior, trust, and loyalty towards a product or brand, as well as the effectiveness of messaging on the graded lanes, remains unclear. The research aims to examine the impact of transparent graded lanes on consumer behavior, trust, and loyalty towards products or brands in the context of the Janbo chain supermarket in Tehran, Iran, focusing on Ketchup and edible oils containers. A representative sample of 720 respondents is selected using quota sampling based on sex, age, and financial status. The study assesses the effect of messaging on the graded lanes in enhancing consumer recall and recognition of the product at the time of purchase, increasing repeat purchases, and fostering long-term relationships with customers. Furthermore, the potential outcomes of using transparent graded lanes, including the promotion of healthy consumption habits and the reduction of food waste, are also considered. The findings and results can inform the development of effective messaging strategies for graded lanes and suggest ways to enhance consumer engagement with product packaging. Moreover, the study's outcomes can contribute to the broader discourse on sustainable consumption and healthy lifestyles, highlighting the potential role of packaging innovations in promoting these values. We used four theories (social cognitive theory, self-perception theory, nudge theory, and marketing and consumer behavior) to examine the effect of these transparent graded lanes on consumer behavior. 
The conceptual model integrates the use of transparent graded lanes, consumer behavior, trust and loyalty, messaging, and promotion of healthy consumption habits. The study aims to provide insights into how transparent graded lanes can promote mindful consumption, increase consumer recognition and recall of the product, and foster long-term relationships with customers. Findings suggest that the use of transparent graded lanes on Ketchup and edible oils containers can have a positive impact on consumer behavior, trust, and loyalty towards a product or brand, as well as promote mindful consumption and healthier choices. The messaging on the graded lanes is also found to be effective in promoting recall and recognition of the product at the time of purchase and encouraging repeat purchases. However, the impact of transparent graded lanes may be limited by factors such as cultural norms, personal values, and financial status. Broadly speaking, the investigation provides valuable insights into the potential benefits and challenges of using transparent graded lanes in product packaging, as well as effective strategies for promoting healthy consumption habits and building long-term relationships with customers.
Keywords: packaging, customer behavior, purchase, brand loyalty, healthy consumption
337 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.
Keywords: ip, technology, copyright, data, infringement, comparative analysis
336 A Practical Methodology for Evaluating Water, Sanitation and Hygiene Education and Training Programs
Authors: Brittany E. Coff, Tommy K. K. Ngai, Laura A. S. MacDonald
Abstract:
Many organizations in the Water, Sanitation and Hygiene (WASH) sector provide education and training in order to increase the effectiveness of their WASH interventions. A key challenge for these organizations is measuring how well their education and training activities contribute to WASH improvements. It is crucial for implementers to understand the returns of their education and training activities so that they can improve and make better progress toward the desired outcomes. The Centre for Affordable Water and Sanitation Technology (CAWST) has developed a methodology for evaluating education and training activities, so that organizations can understand the effectiveness of their WASH activities and improve accordingly; this paper presents CAWST's development and piloting of that methodology. CAWST developed the methodology through a series of research partnerships, followed by staged field pilots in Nepal, Peru, Ethiopia and Haiti. During the research partnerships, CAWST collaborated with universities in the UK and Canada to review a range of available evaluation frameworks, investigate existing practices for evaluating education activities, and develop a draft methodology for evaluating education programs. The draft methodology was then piloted in three separate studies to evaluate the WASH education programs of CAWST and its partners. Each of the pilot studies evaluated education programs in different locations, with different objectives, and at different times within the project cycles. The evaluations in Nepal and Peru were conducted in 2013 and investigated the outcomes and impacts of CAWST’s WASH education services in those countries over the past 5-10 years. In 2014, the methodology was applied to complete a rigorous evaluation of a 3-day WASH Awareness training program in Ethiopia, one year after the training had occurred.
In 2015, the methodology was applied in Haiti to complete a rapid assessment of a Community Health Promotion program, which informed the development of an improved training program. After each pilot evaluation, the methodology was reviewed and improvements were made. A key concept within the methodology is that in order for training activities to lead to improved WASH practices at the community level, it is not enough for participants to acquire new knowledge and skills; they must also apply the new skills and influence the behavior of others following the training. The steps of the methodology include: development of a Theory of Change for the education program, application of the Kirkpatrick model to develop indicators, development of data collection tools, data collection, data analysis and interpretation, and use of the findings for improvement. The methodology was applied in different ways for each pilot and was found to be practical to apply and adapt to meet the needs of each case. It was useful in gathering specific information on the outcomes of the education and training activities, and in developing recommendations for program improvement. Based on the results of the pilot studies, CAWST is developing a set of support materials to enable other WASH implementers to apply the methodology. By using this methodology, more WASH organizations will be able to understand the outcomes and impacts of their training activities, leading to higher quality education programs and improved WASH outcomes.
Keywords: education and training, capacity building, evaluation, water and sanitation
335 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure, reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determining both the amount of binder and the amount of chloride in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are related directly to the mass of binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented, together with an example of the visualization of Li transport in concrete. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes.
Because of the better spatial resolution, more accurate input parameters for model calculations are obtained. By simultaneously detecting elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for applying the method to the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
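Quantitative LIBS generally maps measured line intensity to analyte concentration through calibration standards; the least-squares sketch below uses invented calibration points (intensity in arbitrary units, chloride in % by binder mass) and is not the authors' calibration procedure:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical calibration standards: Cl line intensity vs. chloride content
intensity = [100.0, 200.0, 300.0, 400.0]
chloride = [0.1, 0.3, 0.5, 0.7]  # % by mass of binder
a, b = fit_line(intensity, chloride)

# Predict chloride content for a measured spot on a drilled core
print(round(a * 250.0 + b, 2))  # → 0.4
```

A 2-D chloride map is then just this conversion applied to every scan position of the portable scanner.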
334 Photo-Fenton Degradation of Organic Compounds by Iron(II)-Embedded Composites
Authors: Marius Sebastian Secula, Andreea Vajda, Benoit Cagnon, Ioan Mamaliga
Abstract:
Dyes represent one of the most important classes of pollutants. Their synthetic character and complex molecular structure make them stable and difficult to biodegrade in water, so treating wastewaters containing dyes in order to separate or degrade them is of major importance. Various techniques have been employed to remove and/or degrade dyes in water, and advanced oxidation processes (AOPs) are known to be among the most efficient for dye degradation. The aim of this work is to investigate the efficiency of a cheap iron-impregnated activated carbon Fenton-like catalyst for degrading organic compounds in aqueous solutions. In the present study, an anionic dye, Indigo Carmine, is considered as a model pollutant. Various AOPs are evaluated for the degradation of Indigo Carmine to establish the effect of the prepared catalyst. It was found that the Iron(II)-embedded activated carbon composite significantly enhances the degradation of Indigo Carmine. Using the wet impregnation procedure, 5 g of L27 AC material were contacted with Fe(II) solutions of an FeSO4 precursor at a theoretical iron content of 1% in the resulting composite. The L27 AC was impregnated for 3 h at 45 °C, then filtered, washed several times with water and ethanol, and dried at 55 °C for 24 h. Thermogravimetric analysis, Fourier transform infrared spectroscopy, X-ray diffraction, and transmission electron microscopy were employed to investigate the structure, texture, and micromorphology of the catalyst. The total iron content of the obtained composites and the iron leakage were determined spectrophotometrically using phenanthroline. Photo-catalytic tests were performed using a UV-Consulting Peschl Laboratory Reactor System.
UV light irradiation tests were carried out to determine the performance of the prepared iron-impregnated composite towards the degradation of Indigo Carmine in aqueous solution under different conditions (17 W UV lamps, with and without in-situ generation of O3; different concentrations of H2O2; different initial concentrations of Indigo Carmine; different pH values; different doses of NH4OH enhancer). The photocatalytic tests were performed after the adsorption equilibrium had been established. The photo-Fenton degradation of IC was also tested at different initial concentrations. The investigated process obeys pseudo-first-order kinetics. The obtained results emphasize an enhancement of Indigo Carmine degradation in the case of the heterogeneous photo-Fenton process conducted with an O3-generating UV lamp in the presence of hydrogen peroxide. Acknowledgments: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: photodegradation, heterogeneous Fenton, anionic dye, carbonaceous composite, screening factorial design
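Pseudo-first-order behaviour means ln(C0/C) grows linearly with time, so an apparent rate constant can be fitted by least squares; the decay data below are fabricated for illustration (the paper's concentration measurements are not reproduced here):

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Apparent rate constant from ln(C0/C) = k*t (least squares through the origin)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    return sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

# Hypothetical Indigo Carmine decay under UV/O3/H2O2 (minutes, mg/L)
t = [0, 10, 20, 30, 40]
c = [20.0, 12.1, 7.3, 4.4, 2.7]
k = pseudo_first_order_k(t, c)
half_life = math.log(2) / k  # time for the concentration to halve
print(round(k, 3), round(half_life, 1))
```

Comparing fitted k values across lamp, H2O2, and pH conditions is the usual way such screening experiments are summarized.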
333 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens
Authors: R. Tamborrino, F. Rinaudo
Abstract:
Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to guide the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to add other sources to a unique and coherent model that permits the intersection of different data, opening new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of that information inside cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising an ad-hoc instrument that facilitates the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for analysing the modifications of the urban landscape during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation.
The data processing allows linking this information to the urban space and also visualising the growth of the city at that time. The sources related to the urban landscape development in that period are of a different nature. The emerging necessity to build, enlarge, modify, and join different buildings to boost industrial activities, in line with their fast development, is recorded in official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility, firstly, to re-build the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) that show the previous and the planned situation. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present: industrial heritage and people meet urban history.
Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities
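The kind of record linkage the abstract describes, joining census entries for factories to city blocks and to the municipal permit documents, can be sketched in a few lines. The snippet below is an illustrative sketch only; it is not the authors' GIS platform, and all field names and records are hypothetical:

```python
# Hypothetical records standing in for the digitised archival sources.
factories = [  # entries from the pre-WWI factory census
    {"id": 1, "activity": "textile", "block": "B12", "owner": "Rossi"},
    {"id": 2, "activity": "metalworking", "block": "B07", "owner": "Bianchi"},
]
permits = [  # building permissions from the municipal archive
    {"block": "B12", "year": 1908, "designer": "Ing. Verdi", "work": "enlargement"},
]

def factories_with_permits(factories, permits):
    """Join factory records to permit records via the shared block identifier,
    the way a GIS attribute join links inventory entries to mapped plots."""
    by_block = {}
    for p in permits:
        by_block.setdefault(p["block"], []).append(p)
    return [{**f, "permits": by_block.get(f["block"], [])} for f in factories]

linked = factories_with_permits(factories, permits)
```

In a real GIS the block identifier would be a spatial key on a cartographic layer, so the same join also drives map visualisation and 3D reconstruction.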
Procedia PDF Downloads 191
332 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá
Authors: Dayron Camilo Bermudez Mendoza
Abstract:
Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results will be presented and published at the upcoming conference; the work integrates multidisciplinary knowledge and culminates in a master's thesis.
Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility
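The hedonic price method mentioned above regresses property values on environmental attributes to recover the implicit price of emissions exposure. The following is a minimal sketch under stated assumptions: the data are simulated (the true pollution effect is set to -12 per µg/m³ of PM2.5), and an ordinary least squares fit via NumPy stands in for the study's actual econometric model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
area = rng.uniform(40, 200, n)   # dwelling floor area, m^2 (simulated)
pm25 = rng.uniform(5, 35, n)     # local PM2.5 exposure, ug/m^3 (simulated)
# Simulated price per m^2: pollution depresses value (true effect: -12)
price = 3000 + 4.0 * area - 12.0 * pm25 + rng.normal(0, 50, n)

X = np.column_stack([np.ones(n), area, pm25])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
# beta[2] estimates the marginal (hedonic) price of one extra ug/m^3 of PM2.5
```

The estimated `beta[2]` recovers the negative implicit price of pollution; in the actual study this coefficient would come from observed block-level property values rather than simulated data.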
Procedia PDF Downloads 58
331 Culvert Blockage Evaluation Using Australian Rainfall and Runoff 2019
Authors: Rob Leslie, Taher Karimian
Abstract:
The blockage of cross drainage structures is a risk that needs to be understood and managed, or lessened through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been subject to extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019 – Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type, debris availability) and site-specific hydraulic factors that influence blockage.
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were fed into a spreadsheet in which the design blockage level for each cross drainage structure was determined, based on the condition relating the inlet clear width, L10 (the average length of the longest 10% of the debris reaching the site), and the adjusted debris potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
Keywords: ARR 2019, blockage, culverts, methodology
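The decision logic described above, relating the inlet clear width, L10, and the adjusted debris potential to a design blockage level, lends itself to exactly the kind of automation the study recommends. The sketch below uses illustrative placeholder thresholds, not the actual ARR 2019 design tables, which should be taken from Book 6, Chapter 6:

```python
def design_blockage_level(inlet_clear_width, L10, adjusted_debris_potential):
    """Classify a culvert's design blockage level from the inlet clear width,
    L10 (average length of the longest 10% of debris reaching the site), and
    an adjusted debris potential rating ('low' | 'medium' | 'high').

    The numeric thresholds and levels below are illustrative placeholders,
    NOT the ARR 2019 design values.
    """
    if inlet_clear_width <= L10:
        # opening narrower than typical long debris: treat as fully blocked
        return 1.0
    if adjusted_debris_potential == "high":
        return 0.5
    if adjusted_debris_potential == "medium":
        return 0.25
    return 0.0
```

Applied over a table of crossings, such a function reproduces the spreadsheet workflow the authors describe for their 200+ structures.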
Procedia PDF Downloads 363
330 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method
Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro
Abstract:
Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of its extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED), have invaded well beyond their home ranges, whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method, as implemented in BEAST, was used to infer the historical effective population size. In order to overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran) and Reunion Island. The non-invasive AUS species endemic to Australia was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population is different from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth.
The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and those from Iran and Turkmenistan are similar, as both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the later expansion but had the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED population, suggesting that IO faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods, that are lacking in the non-invasive species (IO and AUS). These expansions strongly suggest that genomic factors in some Bemisia species have shaped their adaptability and invasiveness.
Keywords: whitefly, RADseq, invasive species, SNP, climate change
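The coalescent inference summarised above is performed in BEAST, but the basic link between sequence variation and effective population size can be illustrated with a much simpler statistic. The sketch below computes nucleotide diversity (π), which under neutrality estimates θ = 4Neμ, so larger π points to a larger long-term effective population size; it is a toy summary on hypothetical sequences, not the Bayesian Skyline analysis itself:

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise difference per site (pi) across aligned sequences
    of equal length. Under neutrality pi estimates theta = 4*Ne*mu."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Three hypothetical aligned 8 bp sequences: 4 mismatches over 3 pairs
pi = nucleotide_diversity(["ACGTACGT", "ACGTACGA", "ACGAACGA"])
```

The BSP method goes further by modelling how Ne has changed through time from the genealogy, rather than reporting a single long-term average.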
Procedia PDF Downloads 126
329 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, uncertainty about future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to a region of the customer space, with an indeterminate boundary between them whose location is generally set by specific company policies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
The customer space gives an aggregate view of customer behaviors and characteristics. It allows policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
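The two steps described above, projecting customers into a two-dimensional space and proposing warehouse sites with k-means, can be sketched as follows. The data are synthetic stand-ins for the company's customer records, and the plain k-means below (with a deterministic farthest-point initialisation) is illustrative rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Customer space: (distance to nearest plant in km, orders per month).
near_frequent = rng.normal([50, 20], [10, 4], size=(40, 2))
far_rare = rng.normal([400, 2], [60, 1], size=(40, 2))
customers = np.vstack([near_frequent, far_rare])

def kmeans(points, k, iters=50):
    """Plain k-means; the centroids suggest candidate warehouse sites."""
    # farthest-point initialisation keeps the sketch deterministic
    centroids = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(points[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
        centroids = np.array(
            [points[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(customers, k=2)
```

In the study itself the clustering would run over real customer coordinates and demand frequencies, and the resulting centroids would be screened against rail access and storage costs before being proposed as warehouses.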
Procedia PDF Downloads 139
328 Drivers of the Performance of Members of a Social Incubator Considering the Values of Work: A Qualitative Study with Social Entrepreneurs
Authors: Leticia Lengler, Vania Estivalete, Vivian Flores Costa, Tais De Andrade, Lisiane Fellini Faller
Abstract:
Social entrepreneurship has emerged and driven a new development perspective, one based, as the literature notes, on innovation and mainly on the creation of social value rather than on personal wealth for shareholders. In this field of study, one focus of discussion refers to the distinct characteristics of the individuals responsible for socially directed initiatives, known as social entrepreneurs. To contribute to this perspective, the present study aims to identify the values related to work that guide the performance of social entrepreneurs who are members of enterprises developed within a social incubator at a federal institution of higher education in Brazil. Each person's value system is present in different facets of their life, manifesting itself in their choices and in the way they conduct relationships with other people in society. The values of work in particular, the focus of this research, play a significant role in organizational studies, since they are considered one of the important guiding principles of the behavior of individuals in the work environment. Regarding the method, a descriptive and qualitative study was carried out. For data collection, 24 entrepreneurs, members of five different enterprises belonging to the social incubator, were interviewed. The research instrument consisted of three open questions, which could be answered with the support of a "disc of values", an artifact organized to demonstrate the values of work clearly to the respondents. The analysis of the interviews took into account categories defined a priori, based on the model proposed by previous authors who validated these constructs within their research contexts, contemplating the following dimensions: Self-determination and stimulation; Safety; Conformity; Universalism and benevolence; Achievement; and Power.
It should be noted that, to aid the interviewees' understanding, these dimensions were represented in the "disc of values" by the objectives that define them, respectively: Challenge; Financial independence; Commitment; Welfare of others; Personal success; and Power. Preliminary results show that priority is given to work values related to Self-determination and stimulation, Conformity, and Universalism and benevolence. These findings point to the importance these individuals give to independent thinking and acting, as well as to novelty and constant challenge. They also demonstrate an appreciation of commitment to their enterprise, the people who make it up, and the quality of their work, and they point to the relevance of being able to contribute to the greater social good, that is, to the well-being of close people and of society, as implied in models of social entrepreneurship in the literature. With a lower degree of priority, the values denominated Safety and Achievement, relating to financial security at work and to the search for satisfaction and personal success through the use of socially recognized skills, were mentioned with little emphasis by the social entrepreneurs. The value Power was not considered a guiding principle of work by the respondents.
Keywords: qualitative study, social entrepreneur, social incubator, values of work
Procedia PDF Downloads 260
327 Fulfillment of Models of Prenatal Care in Adolescents from Mexico and Chile
Authors: Alejandra Sierra, Gloria Valadez, Adriana Dávalos, Mirliana Ramírez
Abstract:
For years, the Pan American Health Organization/World Health Organization and other organizations have made efforts to improve access to, and the quality of, prenatal care as part of comprehensive programs for maternal and neonatal health; standards of care have been renewed in order to migrate from a medical perspective to a holistic one. However, despite these efforts, current antenatal care models have not been verified by scientific evaluation to determine their effectiveness. Teenage pregnancy is considered a very important phenomenon, since it has been strongly associated with inequality, poverty, and the lack of gender equality; it is therefore important to analyze the antenatal care being given, including not only the clinical interventions but also the surrounding advertising and health-education activities. The objective of this study was to describe whether the activities previously established in prenatal care models are being performed in the care of pregnant teenagers attending prenatal care in health institutions in two cities, in Mexico and Chile, during 2013. Methods: Observational, descriptive, cross-sectional study. 170 pregnant women (13-19 years) receiving prenatal care in two health institutions were included (100 women from León, Mexico and 70 from Coquimbo, Chile). Data collection: direct survey and the perinatal clinical record card, used as checklists against the WHO antenatal care model (WHO, 2003), the Official Mexican Standard NOM-007-SSA2-1993, and the Personalized Service Manual on the Reproductive Process – Chile Crece Contigo; descriptive statistics were used for data analysis. The project was approved by the relevant ethics committees.
Results: Interventions focused on physical and gynecological examination, immunizations, and the monitoring of vital signs and biochemical parameters were fulfilled in more than 84% of cases in both groups. For the guidance and counseling of pregnant teenagers, compliance rates in León were below 50%; although pregnant women in Coquimbo had higher compliance percentages, none reached 100%. The topics least addressed were family planning, signs and symptoms of complications, and labor. Conclusions: Although coverage of the interventions indicated in the prenatal care models was high, there were still shortcomings in the fulfillment of orientation, education, and health promotion activities. Deficiencies in adherence to prenatal care guidelines could be due to different circumstances, such as missing or incomplete medical records, lack of medical supplies or health personnel, and missed prenatal check-up appointments, among others. Studies are therefore required to evaluate the quality of prenatal care and the effectiveness of existing models, considering the role of the different actors involved (pregnant women, professionals, and health institutions) in the functionality and quality of prenatal care models, in order to design strategies to improve the application of a complete process of promotion and prevention in maternal and child health, as well as in sexual and reproductive health in general.
Keywords: adolescent health, health systems, maternal health, primary health care
Procedia PDF Downloads 206
326 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies
Authors: Simon Winberg
Abstract:
The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. This connects strongly with the broader notion of Industry 4.0, where technology and society are being brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and Master's programmes. These programmes aim to provide skills and training for professionals, to expand their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) Master's degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a ‘hybrid academic’, assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology: Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants' fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience, which they shared with the class. This created a complicated dynamic in the class, centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of ‘epistemic objects’, which is to say objects with knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed, to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR Master's course in terms of conflicts and innovations in its activity system, as well as its continually hybridizing genre ecology, to show how the structuring and resource-dependence of the course transformed over time from its initial ‘traditional’ academic structure to a more entangled arrangement. It is hoped that insights from this paper will benefit other educators involved in the design and teaching of similar specialized professional postgraduate taught programmes.
Keywords: professional postgraduate education, taught masters, engineering education, software defined radio
Procedia PDF Downloads 92
325 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction
Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier
Abstract:
Chosen for the design and assembly of massive components in the nuclear industry, 316 Nb austenitic stainless steel (hereafter 316 Nb) suits this role well thanks to its mechanical properties and its heat and corrosion resistance. However, these properties may change over the steel's service life due to thermal ageing, which causes changes within its microstructure. Our main purpose is to determine whether 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) over a long period of time (< 10 years). 316 Nb is composed of different phases: austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite-to-austenite transformation during processing. Our purpose is to understand the effects of thermal ageing on the material's microstructure and properties and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on the Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1 vol% ferrite) and welds (10 vol% ferrite) for various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Temperatures above the service temperature were chosen to reduce thermal treatment time by exploiting the kinetic effect of temperature on 316 Nb ageing without modifying the reaction mechanisms. Our results from early ageing times show no effect on the steel's global properties linked to austenite stability, but an increase in ferrite hardness during thermal ageing has been observed. It has been shown that austenite's crystalline structure (fcc) grants it thermal stability, whereas the ferrite structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of thermal ageing effects on the ferrite's microstructure were therefore necessary to understand the changes caused by the thermal treatment.
Analyses have been performed using techniques such as Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). A demixing of the alloy's elements, leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α’ phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite, has been observed and associated with the increase in ferrite hardness. APT results provide information about the phases' volume fractions and compositions, making it possible to associate the hardness measurements with the volume fractions of the different phases and to set up a way to calculate the growth rate of the α’ and nickel-rich particles as a function of temperature. The same methodology has been applied to the DSC results, which allowed us to measure the enthalpy of α’ phase dissolution between 500 and 600 °C. To summarise, we started from mechanical and macroscopic measurements and explained the results through a microstructural study. The data obtained have been matched to CALPHAD model predictions and used to improve these calculations, which are then employed to predict changes in 316 Nb properties during the industrial process.
Keywords: stainless steel characterization, atom probe tomography APT, vickers hardness, differential scanning calorimetry DSC, thermal ageing
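One simple way to connect phase volume fractions of the kind APT reports to an overall hardness, as the abstract describes, is a linear rule of mixtures. The sketch below is illustrative only; the fractions and hardness values are hypothetical, not the study's data, and the linear rule is a first approximation rather than the authors' model:

```python
def mixture_hardness(fractions, hardnesses):
    """Linear rule-of-mixtures estimate of hardness from phase volume
    fractions (parallel sequences; fractions must sum to ~1)."""
    assert abs(sum(fractions) - 1.0) < 1e-6, "fractions must sum to 1"
    return sum(f * h for f, h in zip(fractions, hardnesses))

# Hypothetical aged ferrite: alpha matrix, Cr-rich alpha' and Ni-rich
# particles, with made-up Vickers hardness values for each phase.
hv = mixture_hardness([0.80, 0.15, 0.05], [250.0, 800.0, 400.0])
```

Inverting such a relation against measured hardness is one way the phase fractions and hardness data could be cross-checked.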
Procedia PDF Downloads 93
324 An eHealth Intervention Using Accelerometer-Smartphone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting
Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola
Abstract:
Working in the military places special demands on physical fitness; however, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also seen among the working-age population in Finland, are leading to reduced physical fitness and an increased risk of cardiovascular and metabolic diseases, which in turn increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using an accelerometer–smartphone app feedback technique, telephone counseling, and physical activity recordings to increase the physical activity of personnel and thereby improve their health. Specific aims were to reduce stress, improve the quality of sleep, mental and physical performance and the ability to work, and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study; finally, 260 voluntary participants were included (66 women, 194 men). The participants were randomized into an intervention group (156) and a control group (104). The eHealth intervention group used accelerometers measuring daily physical activity and the duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps, giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit already offered to employees under existing FDF guidelines. To separate exercise done during working hours from the accelerometer data, the intervention group recorded this exercise in an exercise diary. The intervention group also received telephone counseling about their physical activity. The control group participants, on the other hand, continued with their normal exercise routine without the accelerometer and feedback.
They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged to do so, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide-ranging questionnaire focusing on sociodemographic factors, physical activity, and health. In terms of results, the primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The present evaluation of the study's scientific reach is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at baseline and at the one-year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.
Keywords: accelerometer, health, mobile applications, physical activity, physical performance
Procedia PDF Downloads 196
323 The Effect of Extensive Mosquito Migration on Dengue Control as Revealed by Phylogeny of Dengue Vector Aedes aegypti
Authors: M. D. Nirmani, K. L. N. Perera, G. H. Galhena
Abstract:
Dengue has become one of the most important arboviral diseases in all tropical and subtropical regions of the world. Aedes aegypti, the principal vector of the virus, varies in both epidemiological and behavioral characteristics, which can be finely measured through DNA sequence comparison at the population level. Such knowledge of population differences can assist the implementation of effective vector control strategies by allowing estimates of gene flow and adaptive genomic changes, which are important predictors of the spread of Wolbachia infection or insecticide resistance. This study was therefore undertaken to investigate the phylogenetic relationships of Ae. aegypti from Galle and Colombo, Sri Lanka, based on a ribosomal protein region that spans two exons, in order to understand the geographical distribution of genetically distinct mosquito clades and its impact on mosquito control measures. A 320 bp DNA region spanning positions 681-930, corresponding to the ribosomal protein, was sequenced in 62 Ae. aegypti larvae collected from Galle (N=30) and Colombo (N=32), Sri Lanka. The sequences were aligned using ClustalW, and the haplotypes were determined with DnaSP 5.10. Phylogenetic relationships among haplotypes were reconstructed using the maximum likelihood method under the Tamura 3-parameter model in MEGA 7.0.14, including three previously reported sequences of Australian (N=2) and Brazilian (N=1) Ae. aegypti. Bootstrap support was calculated using 1000 replicates, and the tree was rooted using Aedes notoscriptus (GenBank accession no. KJ194101). Nineteen different haplotypes were found among all sequences, of which five were shared between 80% of the mosquitoes in the two populations. Seven haplotypes were unique to each population. The phylogenetic tree revealed two basal clades and a single derived clade. All observed haplotypes of the two Ae.
aegypti populations were distributed across all three clades, indicating a lack of genetic differentiation between populations. The Brazilian Ae. aegypti haplotype and one of the Australian haplotypes grouped together with the Sri Lankan basal haplotype in the same basal clade, whereas the other Australian haplotype fell in the derived clade. The phylogram showed that the Galle and Colombo Ae. aegypti populations are highly related to each other despite the large geographic distance (129 km), indicating substantial genetic similarity between them. This has probably arisen from passive migration assisted by human travel and trade through both land and water, as the two areas are bordered by the sea. In addition, the studied Sri Lankan mosquito populations were closely related to the Australian and Brazilian samples. This might be attributable to the shipping industry connecting the three countries, as all of them are fully or partially enclosed by sea; for example, illegal fishing boats migrating to Australia by sea are perhaps an effective means of transporting all life stages of mosquitoes from Sri Lanka. These findings indicate that extensive mosquito migration occurs between populations not only within the country but also among other countries in the world, which may be a major barrier to successful vector control measures.
Keywords: Aedes aegypti, dengue control, extensive mosquito migration, haplotypes, phylogeny, ribosomal protein
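Haplotype determination of the kind performed here with DnaSP amounts, at its core, to grouping identical aligned sequences and tallying which groups occur in which population. A minimal sketch with made-up toy sequences (not the study's data):

```python
from collections import Counter

def haplotypes(seqs):
    """Group identical aligned sequences into haplotypes, counting carriers of each."""
    return Counter(seqs)

# Toy aligned sequences for two hypothetical populations
galle   = ["ACGT", "ACGT", "ACGA", "ACTA"]
colombo = ["ACGT", "ACGA", "ACGA", "GCGT"]

hap_g, hap_c = haplotypes(galle), haplotypes(colombo)
shared   = set(hap_g) & set(hap_c)  # haplotypes present in both populations
unique_g = set(hap_g) - set(hap_c)  # haplotypes unique to the first population
print(sorted(shared))    # ['ACGA', 'ACGT']
print(sorted(unique_g))  # ['ACTA']
```

Shared haplotypes across distant populations, as in the toy example, are the signal the study interprets as gene flow via passive migration.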
Procedia PDF Downloads 190
322 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research
Authors: Edvard P. G. Bruun
Abstract:
One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and in deriving the mathematical models that underpin finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is with a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles, a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that improves the accuracy and removes experimental bias in the calculation of the Mohr’s circle by using four rather than three independent strain measurements. Obtaining high-quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires the direction of the principal stress and strain, from which it calculates the average secant stiffness in that direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), the localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four instrumented directions (X, Y, H, and V). The question then becomes: which three should be used? The most common approach is to simply discard one of the measurements.
The problem is that this can produce drastically different answers depending on which three strain values are chosen. To overcome this experimental bias, and to avoid discarding valuable data, a more rigorous approach is to make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a 'best-fit' Mohr’s circle, which optimizes the circle using all four independent strain values. The four-strain optimized Mohr’s circle approach has been used to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data on which they rest; ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment is of utmost importance to the structural research community.
Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research
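The 'best-fit' idea can be illustrated with an ordinary least-squares fit of the plane-strain state (εx, εy, γxy) to all four readings via the strain transformation ε(θ) = εx cos²θ + εy sin²θ + γxy sinθ cosθ. The gauge angles used below (X=0°, Y=90°, H=45°, V=135°) and the readings are assumptions for illustration, not the paper's actual derivation or data:

```python
import math
import numpy as np

def best_fit_strain_state(angles_deg, strains):
    """Least-squares fit of (eps_x, eps_y, gamma_xy) to three or more strain
    readings at known gauge angles, using
    eps(theta) = eps_x*cos^2(theta) + eps_y*sin^2(theta) + gamma_xy*sin*cos."""
    th = np.radians(angles_deg)
    A = np.column_stack([np.cos(th)**2, np.sin(th)**2, np.sin(th) * np.cos(th)])
    (ex, ey, gxy), *_ = np.linalg.lstsq(A, np.asarray(strains, float), rcond=None)
    # Mohr's circle of the fitted state: center, radius, principal strains, angle
    c = (ex + ey) / 2
    r = math.hypot((ex - ey) / 2, gxy / 2)
    theta_p = 0.5 * math.atan2(gxy, ex - ey)  # principal strain direction
    return (ex, ey, gxy), (c - r, c + r), math.degrees(theta_p)

# Hypothetical readings (millistrain) from gauges at 0, 90, 45, 135 degrees
state, principals, angle = best_fit_strain_state(
    [0, 90, 45, 135], [2.0, -1.0, 1.25, -0.25])
```

With four readings the system is overdetermined, so the fit uses every measurement instead of discarding one; when the readings are mutually consistent, the residual is zero and the circle passes through all four points.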
Procedia PDF Downloads 235