Search results for: skin panel
129 The Readaptation of the Subscale 3 of the NLit-IT (Nutrition Literacy Assessment Instrument for Italian Subjects)
Authors: Virginia Vettori, Chiara Lorini, Vieri Lastrucci, Giulia Di Pisa, Alessia De Blasi, Sara Giuggioli, Guglielmo Bonaccorsi
Abstract:
The design of the Nutrition Literacy Assessment Instrument (NLit) responds to the need for a tool to adequately assess the construct of nutrition literacy (NL), which is strictly connected to the quality of the diet and nutritional health status. The NLit was originally developed and validated in the US context, and it was recently validated for the Italian population too (NLit-IT), involving a sample of N = 74 adults. The results of the cross-cultural adaptation of the tool confirmed its validity, since it was established that the level of NL contributed to predicting the level of adherence to the Mediterranean Diet (convergent validity). Additionally, the results showed that the internal consistency and reliability of the NLit-IT were good (Cronbach’s alpha (ρT) = 0.78; 95% CI, 0.69–0.84; Intraclass Correlation Coefficient (ICC) = 0.68, 95% CI, 0.46–0.85). However, Subscale 3 of the NLit-IT, “Household Food Measurement”, showed lower values of ρT and ICC (ρT = 0.27; 95% CI, 0.1–0.55; ICC = 0.19, 95% CI, 0.01–0.63) than the entire instrument. Subscale 3 includes nine items, each consisting of a written question and a corresponding picture of a meal. In particular, items 2, 3, and 8 of Subscale 3 had the lowest level of correct answers. The purpose of the present study was to identify the factors that influenced the internal consistency and reliability of Subscale 3 of the NLit-IT using a focus group methodology. A panel of seven experts was formed, involving professionals in the fields of public health nutrition, dietetics, and health promotion, all of whom were trained on the concepts of nutrition literacy and food appearance. A member of the group led the discussion, which was oriented toward identifying the reasons for the low levels of reliability and internal consistency. The members of the group discussed the level of comprehension of the items and how they could be readapted. From the discussion, it emerged that the written questions were clear and easy to understand, but the representations of the meals needed to be improved. Firstly, it was decided to introduce a fork or a spoon as a reference object to make the size of the food portion easier to judge (items 1, 4 and 8). Additionally, the flat plate of items 3 and 5 should be replaced with a soup plate because, in the Italian national context, pasta and rice are commonly eaten from this kind of plate. Secondly, specific measures should be considered for some kinds of food, such as a brick of yogurt instead of a cup of yogurt (items 1 and 4). Lastly, it was decided to retake the photos of the meals using professional photographic techniques. In conclusion, we noted that the graphical representation of the items strongly influenced participants’ comprehension of the questions; moreover, the research group agreed that the level of knowledge about nutrition and food portion sizes is low in the general population.
Keywords: nutritional literacy, cross cultural adaptation, misinformation, food design
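The internal consistency figures quoted above (Cronbach’s alpha and the ICC) are standard reliability statistics. As a purely illustrative sketch, and not the authors’ analysis code, Cronbach’s alpha for a nine-item subscale could be computed along the following lines in Python; the 0/1 item matrix is invented for the example.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]                               # number of items (9 for Subscale 3)
        item_variances = items.var(axis=0, ddof=1)       # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical data: 74 respondents x 9 items, scored 0 (wrong) or 1 (correct)
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(74, 9)).astype(float)
    print(round(cronbach_alpha(scores), 2))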
Procedia PDF Downloads 170
128 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites
Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari
Abstract:
Working or living close to demolition sites can increase risks of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which can be associated with a broad range of respiratory diseases including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term used for minute solid particles of typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air; the smaller and lighter solid particles, on the other hand, remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deeper into the alveoli, beyond the body’s natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor, which was used for nanoparticle monitoring at two adjacent indoor and outdoor building demolition sites in southern Georgia. Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location as well as in control sites. Temperature and relative humidity levels were recorded. Indoor demolition works included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition works included acetylene torch and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1054/cm³; 15.4 nm: 170 – 1690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:
127 Inhibition of Influenza Replication through the Restrictive Factors Modulation by CCR5 and CXCR4 Receptor Ligands
Authors: Thauane Silva, Gabrielle do Vale, Andre Ferreira, Marilda Siqueira, Thiago Moreno L. Souza, Milene D. Miranda
Abstract:
The exposure of A(H1N1)pdm09-infected epithelial cells (HeLa) to HIV-1 viral particles, or its gp120, enhanced the content of interferon-induced transmembrane protein 3 (IFITM3), a viral restriction factor (RF), resulting in a decrease in influenza replication. The gp120 binds to CCR5 (R5) or CXCR4 (X4) cell receptors during HIV-1 infection. It is therefore possible that the endogenous ligands of these receptors also modulate the expression of IFITM3 and of other cellular factors that restrict influenza virus replication. Thus, the aim of this study is to analyze the role of the cellular receptors R5 and X4 in modulating RFs in order to inhibit the replication of the influenza virus. A549 cells were treated with 2x effective dose (ED50) of the endogenous R5 or X4 receptor agonists CCL3 (20 ng/ml), CCL4 (10 ng/ml), CCL5 (10 ng/ml) and CXCL12 (100 ng/mL), or of the exogenous agonists gp120 Bal-R5, gp120 IIIB-X4 and their mutants (5 µg/mL). Interferon α (10 ng/mL) and oseltamivir (60 nM) were used as controls. At 24 h post agonist exposure, the cells were infected with influenza virus A(H3N2) at 2 MOI (multiplicity of infection) for 1 h. Then, 24 h post infection, the supernatant was harvested and the viral titre was evaluated by qRT-PCR. To evaluate IFITM3 and SAM and HD domain containing deoxynucleoside triphosphate triphosphohydrolase 1 (SAMHD1) protein levels, A549 cells were exposed to agonists for 24 h, and the monolayer was lysed with Laemmli buffer for western blot (WB) assay or fixed for indirect immunofluorescence (IFI) assay. In addition to this, we analyzed the modulation of other RFs in A549 cells, 24 h post agonist exposure, using a customized RT² Profiler Polymerase Chain Reaction Array. We also performed a functional assay in which A549 cells knocked down for SAMHD1 by small interfering RNA (siRNA) were infected with A(H3N2). In addition, the cells were treated with guanosine to assess the regulatory role of SAMHD1 on dNTPs. We found that R5 and X4 agonists inhibited influenza replication by 54 ± 9%. We observed a four-fold increase in SAMHD1 transcripts in the RF mRNA quantification panel. At 24 h post agonist exposure, we did not observe an increase in IFITM3 protein levels through WB or IFI assays, but we observed an up to three-fold upregulation of the SAMHD1 protein content in A549 cells exposed to agonists. Besides this, influenza replication was enhanced by 20% in cell cultures in which SAMHD1 was knocked down. Guanosine treatment of cells exposed to R5 ligands further inhibited influenza virus replication, suggesting that the inhibitory mechanism may involve the activation of the SAMHD1 deoxynucleotide triphosphohydrolase activity. Thus, our data show for the first time a direct relationship between SAMHD1 and the inhibition of influenza replication, and provide perspectives for new studies on the modulation of signaling, through cellular receptors, to induce proteins of great importance in the control of infections relevant to public health.
Keywords: chemokine receptors, gp120, influenza, virus restriction factors
Procedia PDF Downloads 141
126 Solid Waste and Its Impact on the Human Health
Authors: Waseem Akram, Hafiz Azhar Ali Khan
Abstract:
Unplanned urbanization, together with the change from a simple to a more technologically advanced lifestyle and the flow of rural masses to urban areas, has played a major role in piling up loads of solid waste in our environment. Cities and towns have expanded beyond their boundaries. Uncontrolled population expansion has added to the overall environmental burden. Thus, indifference itself, arising from the non-responsive behavior of people, has today become one of the biggest contributors to the trash problem. Every day, huge amounts of solid waste are thrown into the streets, onto the roads, into parks, and into all those places that are frequently visited by human beings. This behavior, seen in many countries of the world, has led to serious health concerns and environmental issues. Over 80% of the products sold in the market are packed in plastic bags. None of these bags are later recycled; they simply become a permanent environmental concern as they fly around, choke drainage lines, are burnt and release toxic gases into the environment, or form heaps of dumps. Lack of classification of the daily waste generated from houses and other places leads to severe clogging of the sewerage lines and the formation of ponding areas, which ultimately favor vector-borne diseases and sometimes become a route of transmission of poliovirus. Solid waste heaps were examined at different places in the cities. On visual assessment, all of the waste was classified into plastic bags, paper, broken plastic pots, clay pots, steel boxes, wrappers, etc. All solid waste dumping sites in the cities, and waste thrown outside of the trash containers, usually contained wrappers, plastic bags, and unconsumed food products. Insect populations seen at these sites included house flies, bugs, cockroaches, and mosquito larvae breeding in water-filled wrappers, containers, or plastic bags. The populations of mosquitoes, cockroaches, and houseflies were relatively very high at dumping sites close to human settlements. These populations have been associated with cases of dengue, malaria, dysentery, gastroenteritis, and skin allergies during the monsoon and summer seasons. Thus, dumping huge amounts of solid waste in and near residential areas results in serious environmental concerns, the circulation of bad smells, and health-related issues. In some places, the same waste is burnt to drive away mosquitoes with smoke, which ultimately releases toxic material into the atmosphere. Therefore, a proper environmental strategy is needed to minimize the environmental burden, promote the concept of recycled products, and thus reduce the disease burden.
Keywords: solid waste accumulation, disease burden, mosquitoes, vector borne diseases
Procedia PDF Downloads 278
125 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon
Authors: Franck Acho
Abstract:
Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were 285 million people globally (approximately 6.4% of the adult population) suffering from this disease. This number is estimated to increase to 430 million in the absence of better control or cure. An ageing population and obesity are two main reasons for the increase. Diabetes mellitus is a chronic heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels or hyperglycemia, which results from abnormalities in either insulin secretion or insulin action or both. Hyperglycemia manifests in various forms with a varied presentation and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality. Hyperglycemia also serves as the primary biomarker for the diagnosis of diabetes. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after onset of the disease; hence, the real prevalence of global diabetes must be astronomically high. This study was conducted in a locality to assess the level of knowledge and awareness of, and the risk factors associated with, diabetes mellitus among people living with the disease. A month before the screening was to be conducted, it was advertised in selected churches, on the local community radio, and in relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, its benefits, and any possible discomforts, after which each attendee’s consent was obtained. Participants selected for the screening were evaluated for any pointers to diabetes by taking an adequate history and performing physical examinations for features such as excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, irritability or other mood changes, blurry vision, slow-healing sores, and frequent infections, such as gum, skin and vaginal infections. Of the 94 participants, 78 were female and 16 were male, and 70.21% of participants with diabetes were between the ages of 60 and 69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed in this study, obesity (21.27%) and unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. The diabetic foot was the most recognized diabetes complication (50.57%), but some participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.
Keywords: diabetes mellitus, non-communicable disease, general health talk, hyperglycemia
Procedia PDF Downloads 56
124 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increasing number of 'Computed Tomography (CT)' examinations performed raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the ‘International Commission on Radiological Protection (ICRP)’ emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch ...); and (2) the detailed geometric and compositional information of dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the “Head” protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, there are three organs situated at the edges of the scan range that show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
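To illustrate the comparison metric used above, the short sketch below computes per-organ percentage dose differences and applies the 80%-of-maximum filter. The organ names, dose values, and the choice of VirtualDose as the reference denominator are invented assumptions for the example, not the study's data or code.

    # Hypothetical per-organ doses (mGy) reported by the two software packages
    doses_ours = {"lungs": 12.4, "breast": 10.8, "thyroid": 3.1, "liver": 9.7}
    doses_virtualdose = {"lungs": 14.9, "breast": 9.6, "thyroid": 4.0, "liver": 11.2}

    max_dose = max(doses_ours.values())
    for organ, d_ours in doses_ours.items():
        if d_ours >= 0.8 * max_dose:  # organs receiving a significant dose (80% of the maximum)
            diff_pct = 100.0 * abs(d_ours - doses_virtualdose[organ]) / doses_virtualdose[organ]
            print(f"{organ}: {diff_pct:.1f}% dose difference")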
Procedia PDF Downloads 162
123 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of structural building under lateral load (e.g earthquake) will rise due to irregularity in vertical and horizontal direction as stated in SNI 03-1762-2012. The conventional slab has been considered for its less contribution in increasing the stability of the structure, except special slab system such as flat slab turned into account. In this paper, the analysis of flat slab system at Sequis Tower located in South Jakarta will be assessed its performance under earthquake. It consists of 6 floors of the basement where the flat slab system is applied. The flat slab system will be the main focus in this paper to be compared for its performance with conventional slab system under earthquake. Regarding the floor plan of Sequis Tower basement, re-entrant corner signed for this building is 43.21% which exceeded the allowable re-entrant corner is 15% as stated in ASCE 7-05 Based on that, the horizontal irregularity will be another concern for analysis, otherwise vertical irregularity does not exist for this building. Flat slab system is a system where the slabs use drop panel with shear head as their support instead of using beams. Major advantages of flat slab application are decreasing dead load of structure, removing beams so that the clear height can be maximized, and providing lateral resistance due to lateral load. Whilst, deflection at middle strip and punching shear are problems to be detail considered. Torsion usually appears when the structural member under flexure such as beam or column dimension is improper in ratio. Considering flat slab as alternative slab system will keep the collapse due to torsion down. Common seismic load resisting system applied in the building is a shear wall. Installation of shear wall will keep the structural system stronger and stiffer affecting in reduced displacement under earthquake. Eccentricity of shear wall location of this building resolved the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analysis such as response spectrum and time history analysis due to earthquake load is suitable as the irregularity arise so that the performance of structure can be significantly observed. Utilization of response spectrum data for South Jakarta which PGA 0.389g is basic for the earthquake load idealization to be involved in several load combinations stated on SNI 03-1726-2012. The analysis will result in some basic seismic parameters such as period, displacement, and base shear of the system; besides the internal forces of the critical member will be presented. Predicted period of a structure under earthquake load is 0.45 second, but as different slab system applied in the analysis then the period will show a different value. Flat slab system will probably result in better performance for the displacement parameter compare to conventional slab system due to higher contribution of stiffness to the whole system of the building. In line with displacement, the deflection of the slab will result smaller for flat slab than a conventional slab. Henceforth, shear wall will be effective to strengthen the conventional slab system than flat slab system.Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
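As a rough numerical illustration of the force-based idea behind the seismic loading described above, a design base shear of the form V = Cs · W can be sketched as follows. The response modification factor, the amplification applied to the 0.389g PGA, and the seismic weight are invented placeholders; this does not reproduce the SNI 03-1726-2012 procedure or the building's actual analysis.

    # Generic force-based base-shear estimate, V = Cs * W (illustrative assumptions only)
    def seismic_coefficient(sa_g, r_factor=8.0, importance=1.0):
        """Seismic response coefficient from spectral acceleration (in g) at the fundamental period."""
        return sa_g / (r_factor / importance)

    sa_at_t = 0.389 * 2.5        # assumed spectral acceleration (g) near T = 0.45 s
    seismic_weight = 250_000.0   # assumed seismic weight of the structure (kN)
    base_shear = seismic_coefficient(sa_at_t) * seismic_weight
    print(f"V = {base_shear:.0f} kN")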
Procedia PDF Downloads 191
122 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis
Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan
Abstract:
The immunogenic responses of the lung towards the fungus Aspergillus fumigatus may range from invasive aspergillosis in the immunocompromised, fungal ball or infection within a cavity in the lung in those with structural lung lesions, or allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. There are consensus guidelines that have established criteria for diagnosis of ABPA, but uncertainty remains on the serologic cut-off values that would increase the diagnostic specificity of ABPA. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia ( > 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE and specific IgG levels against Aspergillus fumigatus were measured using ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L and positive specific IgE Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG Aspergillus fumigatus and consistent radiographic opacities) was used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups - definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign the patients into any of the groups. Of 80 patients (males 48, females 32; mean age 53.9 years ± SD 15.8) selected for the analysis, there were 30 patients who had positive specific IgE against Aspergillus fumigatus (37.5%). 13 patients fulfilled the Modified ISHAM working group 2013 criteria of ABPA (‘definite’), while 15 patients were ‘possible’ ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L respectively (non-significant), while median specific IgE Aspergillus fumigatus at 4.35 kUA/L and 1.47 kUA/L respectively were significantly different (comparison of standard deviations F-statistic 3.2267, significance level p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). Mean levels of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L ( ± SD 54.84, with 95%CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001). This was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides a higher serologic specificity towards definite ABPA. Long-term studies would provide us more information if those patients with 'possible' APBA and positive Gm3 later develop clear ABPA, and are different from the Gm3 negative group in this respect. 
Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.
Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level
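As a schematic illustration of the statistics used above (comparison of group IgG levels by ANOVA and evaluation of a serologic cut-off), the sketch below runs a one-way ANOVA and computes the specificity of an 80 mgA/L threshold on invented Gm3 values; it is not the authors' dataset or analysis code.

    import numpy as np
    from scipy.stats import f_oneway

    # Hypothetical Gm3 (IgG anti-Aspergillus fumigatus, mgA/L) values per group
    definite_abpa = np.array([140.0, 95.0, 180.0, 110.0, 75.0, 150.0])
    possible_abpa = np.array([22.0, 15.0, 30.0, 12.0, 18.0])
    not_abpa = np.array([35.0, 28.0, 20.0, 45.0, 25.0, 30.0, 15.0])

    f_stat, p_value = f_oneway(definite_abpa, possible_abpa, not_abpa)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Specificity of a Gm3 >= 80 mgA/L cut-off among patients without definite ABPA
    non_definite = np.concatenate([possible_abpa, not_abpa])
    specificity = np.mean(non_definite < 80.0)
    print(f"Specificity at 80 mgA/L: {specificity:.2f}")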
Procedia PDF Downloads 211
121 Recent Advances in Research on Carotenoids: From Agrofood Production to Health Outcomes
Authors: Antonio J. Melendez-Martinez
Abstract:
Beyond their role as natural colorants, some carotenoids are provitamins A and may be involved in health-promoting biological actions and contribute to reducing the risk of developing non-communicable diseases, including several types of cancer, cardiovascular disease, eye conditions, skin disorders or metabolic disorders. Given the versatility of carotenoids, the COST-funded European network to advance carotenoid research and applications in agro-food and health (EUROCAROTEN) is aimed at promoting health through the diet and increasing well-being by means. Stakeholders from 38 countries participate in this network, and one of its main objectives is to promote research on little-studied carotenoids. In this contribution, recent advances of our research group and collaborators in the study of two such understudied carotenoids, namely phytoene and phytofluene, the colorless carotenoids, are outlined. The study of these carotenoids is important as they have been largely neglected despite they are present in our diets, fluids, and tissues, and evidence is accumulating that they may be involved in health-promoting actions. More specifically, studies on their levels in diverse tomato and orange varieties were carried out as well as on their potential bioavailability from different dietary sources. Furthermore, the potential effect of these carotenoids on an animal model subjected to oxidative stress was evaluated. The tomatoes were grown in research greenhouses, and some of them were subjected to regulated deficit irrigation, a sustainable agronomic practice. The citrus samples were obtained from an experimental field. The levels of carotenoids were assessed using HPLC according to routine methodologies followed in our lab. Regarding the potential bioavailability (bioaccessibility) studies, different products containing colorless carotenoids, like fruits, juices, were subjected to simulated in vitro digestions, and their incorporation into mixed micelles was assessed. The effect of the carotenoids on oxidative stress was evaluated on the Caenorhabditis elegans model. For that purpose, the worms were subjected to oxidative stress by means of a hydrogen peroxide challenge. In relation to the presence of colorless carotenoids in tomatoes and orange varieties, it was observed that they are widespread in such products and that there are mutants with very high quantities of them, for instance, the Cara Cara or Pinalate mutant oranges. The studies on their bioaccessibility revealed that, in general, phytoene and phytofluene are more bioaccessible than other common dietary carotenoids, probably due to their distinctive chemical structure. About the in vivo antioxidant capacity of phytoene and phytofluene, it was observed that they both exerted antioxidant effects at certain doses. In conclusion, evidence on the importance of phytoene and phytofluene as dietary easily bioavailable and antioxidant carotenoids has been obtained in recent studies from our group, which can be important shortly to innovate in health-promotion through the development of functional foods and related products.Keywords: carotenoids, health, functional foods, nutrition, phytoene, phytofluene
Procedia PDF Downloads 103
120 Human 3D Metastatic Melanoma Models for in vitro Evaluation of Targeted Therapy Efficiency
Authors: Delphine Morales, Florian Lombart, Agathe Truchot, Pauline Maire, Pascale Vigneron, Antoine Galmiche, Catherine Lok, Muriel Vayssade
Abstract:
Targeted therapy molecules are used as a first-line treatment for metastatic melanoma with B-Raf mutation. Nevertheless, these molecules can cause side effects to patients and are efficient on 50 to 60 % of them. Indeed, melanoma cell sensitivity to targeted therapy molecules is dependent on tumor microenvironment (cell-cell and cell-extracellular matrix interactions). To better unravel factors modulating cell sensitivity to B-Raf inhibitor, we have developed and compared several melanoma models: from metastatic melanoma cells cultured as monolayer (2D) to a co-culture in a 3D dermal equivalent. Cell response was studied in different melanoma cell lines such as SK-MEL-28 (mutant B-Raf (V600E), sensitive to Vemurafenib), SK-MEL-3 (mutant B-Raf (V600E), resistant to Vemurafenib) and a primary culture of dermal human fibroblasts (HDFn). Assays have initially been performed in a monolayer cell culture (2D), then a second time on a 3D dermal equivalent (dermal human fibroblasts embedded in a collagen gel). All cell lines were treated with Vemurafenib (a B-Raf inhibitor) for 48 hours at various concentrations. Cell sensitivity to treatment was assessed under various aspects: Cell proliferation (cell counting, EdU incorporation, MTS assay), MAPK signaling pathway analysis (Western-Blotting), Apoptosis (TUNEL), Cytokine release (IL-6, IL-1α, HGF, TGF-β, TNF-α) upon Vemurafenib treatment (ELISA) and histology for 3D models. In 2D configuration, the inhibitory effect of Vemurafenib on cell proliferation was confirmed on SK-MEL-28 cells (IC50=0.5 µM), and not on the SK-MEL-3 cell line. No apoptotic signal was detected in SK-MEL-28-treated cells, suggesting a cytostatic effect of the Vemurafenib rather than a cytotoxic one. The inhibition of SK-MEL-28 cell proliferation upon treatment was correlated with a strong expression decrease of phosphorylated proteins involved in the MAPK pathway (ERK, MEK, and AKT/PKB). Vemurafenib (from 5 µM to 10 µM) also slowed down HDFn proliferation, whatever cell culture configuration (monolayer or 3D dermal equivalent). SK-MEL-28 cells cultured in the dermal equivalent were still sensitive to high Vemurafenib concentrations. To better characterize all cell population impacts (melanoma cells, dermal fibroblasts) on Vemurafenib efficacy, cytokine release is being studied in 2D and 3D models. We have successfully developed and validated a relevant 3D model, mimicking cutaneous metastatic melanoma and tumor microenvironment. This 3D melanoma model will become more complex by adding a third cell population, keratinocytes, allowing us to characterize the epidermis influence on the melanoma cell sensitivity to Vemurafenib. In the long run, the establishment of more relevant 3D melanoma models with patients’ cells might be useful for personalized therapy development. The authors would like to thank the Picardie region and the European Regional Development Fund (ERDF) 2014/2020 for the funding of this work and Oise committee of "La ligue contre le cancer".Keywords: 3D human skin model, melanoma, tissue engineering, vemurafenib efficiency
Procedia PDF Downloads 304
119 A Case of Bilateral Vulval Abscess with Pelvic Fistula in an Immunocompromised Patient with Colostomy: A Diagnostic Challenge
Authors: Paul Feyi Waboso
Abstract:
This case report presents a 57-year-old female patient with a history of colon cancer, colostomy, and immunocompromise, who presented with an unusual bilateral vulval abscess, more prominent on the left side. Due to the atypical presentation, an MRI was performed, revealing a pelvic collection and a fistulous connection between the pelvis and vulva. This finding prompted an urgent surgical intervention. This case highlights the diagnostic and therapeutic challenges of managing complex abscesses and fistulas in immunocompromised patients. Introduction: Vulval abscesses in immunocompromised individuals can present with atypical features and may be associated with complex pathologies. Patients with a history of cancer, colostomy, and immunocompromise are particularly prone to infections and may present with unusual manifestations. This report discusses a case of a large bilateral vulval abscess with an underlying pelvic fistula, emphasizing the importance of advanced imaging in cases with atypical presentations. Case Presentation: A 57-year-old female with a known history of colon cancer, treated with colostomy, presented with severe pain and swelling in the vulval area. Physical examination revealed bilateral vulval swelling, with the abscess on the left side appearing larger and more pronounced than on the right. Given her immunocompromised status and the unusual nature of the presentation, we requested an MRI of the pelvis, suspecting an underlying pathology beyond a typical abscess. Investigations: MRI imaging revealed a significant pelvic collection and identified a fistulous tract between the pelvis and the vulva. This confirmed that the vulval abscess was connected to a deeper pelvic infection, necessitating urgent intervention. Management: After consultation with the multidisciplinary team (MDT), it was agreed that the patient required surgical intervention, having had 48 hours of antibiotics. The patient underwent evacuation of the left-sided vulval abscess under spinal anesthesia. During surgery, the pelvic collection was drained of 200 ml of pus. Outcome and Follow-Up: Postoperative recovery was closely monitored due to the patient’s immunocompromised state. Follow-up imaging and clinical evaluation showed improvement in symptoms, with gradual resolution of infection. The patient was scheduled for regular follow-up visits to monitor for recurrence or further complications. Discussion: Bilateral vulval abscesses are uncommon and, in an immunocompromised patient, warrant thorough investigation to rule out deeper infectious or fistulous connections. This case underscores the utility of MRI in identifying complex fistulous tracts and highlights the importance of a multidisciplinary approach in managing such high-risk patients. Conclusion: This case illustrates a rare presentation of bilateral vulval abscess with an associated pelvic fistula.Keywords: vulval abscess, MDT team, colon cancer with pelvic fistula, vulval skin condition
Procedia PDF Downloads 18
118 Climate Change Law and Transnational Corporations
Authors: Manuel Jose Oyson
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report for the entire world “to both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts.” The IPCC observed “with high confidence” a more rapid rise in total anthropogenic greenhouse gas emissions (GHG) emissions from 2000 to 2010 than in the past three decades that “were the highest in human history”, which if left unchecked will entail a continuing process of global warming and can alter the climate system. Current efforts, however, to respond to the threat of global warming, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states, and fail to involve Transnational Corporations (TNCs) which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with an acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional “state-centric” response to climate change. Putting the focus of GHG emissions away from states recognises that the activities of TNCs “are not bound by national borders” and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally-binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally-binding obligation to reduce their GHG emissions or a legal liability for climate change damage is arguably formidable and unlikely in the absence a recognisable source of obligation in international law or municipal law. Instead a recourse to “soft law” and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help in addressing climate change. Positive effects have been noted by various studies to voluntary approaches. TNCs have also in recent decades voluntarily committed to “soft law” international agreements. This development reflects a growing recognition among corporations in general and TNCs in particular of their corporate social responsibility (CSR). While CSR used to be the domain of “small, offbeat companies”, it has now become part of mainstream organization. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and helping address climate change as part of their CSR. One, as a serious “global commons problem”, climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge-base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations
Procedia PDF Downloads 350
117 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. The pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100; where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps. The use of soft computing techniques, especially Artificial Neural Network (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of the in-service pavements, due to its efficiency in predicting and solving non-linear relationships and dealing with an uncertain large amount of data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts the PCI values. The model’s input consists of percentage areas of 11 different damage types; alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influencing variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
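As a minimal sketch of the kind of regressor described above (33 distress-density inputs, one PCI output on the 0-100 scale), the following PyTorch snippet defines such a network and runs one training step on dummy data. The layer sizes, optimizer, and data are assumptions for illustration, not the authors' architecture or dataset.

    import torch
    from torch import nn

    # 11 distress types x 3 severity levels = 33 input features (% of total pavement area)
    model = nn.Sequential(
        nn.Linear(33, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 1),            # predicted PCI on the 0-100 scale
    )
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a dummy mini-batch of distress densities and PCI targets
    x = torch.rand(256, 33) * 100.0
    y = torch.rand(256, 1) * 100.0
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(float(loss))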
Procedia PDF Downloads 137
116 Impact of Ethiopia's Productive Safety Net Program on Household Dietary Diversity and Child Nutrition in Rural Ethiopia
Authors: Tagel Gebrehiwot, Carolina Castilla
Abstract:
Food insecurity and child malnutrition are among the most critical issues in Ethiopia. Accordingly, different reform programs have been carried to improve household food security. The Food Security Program (FSP) (among others) was introduced to combat the persistent food insecurity problem in the country. The FSP combines a safety net component called the Productive Safety Net Program (PSNP) started in 2005. The goal of PSNP is to offer multi-annual transfers, such as food, cash or a combination of both to chronically food insecure households to break the cycle of food aid. Food or cash transfers are the main elements of PSNP. The case for cash transfers builds on the Sen’s analysis of ‘entitlement to food’, where he argues that restoring access to food by improving demand is a more effective and sustainable response to food insecurity than food aid. Cash-based schemes offer a greater choice of use of the transfer and can allow a greater diversity of food choice. It has been proven that dietary diversity is positively associated with the key pillars of food security. Thus, dietary diversity is considered as a measure of household’s capacity to access a variety of food groups. Studies of dietary diversity among Ethiopian rural households are somewhat rare and there is still a dearth of evidence on the impact of PSNP on household dietary diversity. In this paper, we examine the impact of the Ethiopia’s PSNP on household dietary diversity and child nutrition using panel household surveys. We employed different methodologies for identification. We exploit the exogenous increase in kebeles’ PSNP budget to identify the effect of the change in the amount of money households received in transfers between 2012 and 2014 on the change in dietary diversity. We use three different approaches to identify this effect: two-stage least squares, reduced form IV, and generalized propensity score matching using a continuous treatment. The results indicate the increase in PSNP transfers between 2012 and 2014 had no effect on household dietary diversity. Estimates for different household dietary indicators reveal that the effect of the change in the cash transfer received by the household is statistically and economically insignificant. This finding is robust to different identification strategies and the inclusion of control variables that determine eligibility to become a PSNP beneficiary. To identify the effect of PSNP participation on children height-for-age and stunting we use a difference-in-difference approach. We use children between 2 and 5 in 2012 as a baseline because by then they have achieved long-term failure to grow. The treatment group comprises children ages 2 to 5 in 2014 in PSNP participant households. While changes in height-for-age take time, two years of additional transfers among children who were not born or under the age of 2-3 in 2012 have the potential to make a considerable impact on reducing the prevalence of stunting. The results indicate that participation in PSNP had no effect on child nutrition measured as height-for-age or probability of beings stunted, suggesting that PSNP should be designed in a more nutrition-sensitive way.Keywords: continuous treatment, dietary diversity, impact, nutrition security
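The child-nutrition part of the analysis above relies on a difference-in-differences design. A toy version of that estimator, with invented household observations and variable names, might look like this in Python; it is not the authors' code or data.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: child height-for-age z-scores, PSNP participation, survey round
    df = pd.DataFrame({
        "haz":  [-1.8, -1.5, -2.1, -1.9, -1.7, -1.2, -2.3, -2.0],
        "psnp": [1, 1, 1, 1, 0, 0, 0, 0],   # household participates in PSNP
        "post": [0, 0, 1, 1, 0, 0, 1, 1],   # 0 = 2012 cohort, 1 = 2014 cohort
    })

    # Difference-in-differences: the psnp:post interaction captures the treatment effect
    did = smf.ols("haz ~ psnp + post + psnp:post", data=df).fit()
    print(did.params["psnp:post"])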
Procedia PDF Downloads 335
115 Defining the Tipping Point of Tolerance to CO₂-Induced Ocean Acidification in Larval Dusky Kob Argyrosomus japonicus (Pisces: Sciaenidae)
Authors: Pule P. Mpopetsi, Warren M. Potts, Nicola James, Amber Childs
Abstract:
Increased CO₂ production and the consequent ocean acidification (OA) have been identified as one of the greatest threats to both calcifying and non-calcifying marine organisms. Traditionally, marine fishes, as non-calcifying organisms, were considered to have a higher tolerance to near-future OA conditions owing to their well-developed ion regulatory mechanisms. However, recent studies provide evidence to suggest that they may not be as resilient to near-future OA conditions as previously thought. In addition, earlier life stages of marine fishes are thought to be less tolerant than juveniles and adults of the same species as they lack well-developed ion regulatory mechanisms for maintaining homeostasis. This study focused on the effects of near-future OA on larval Argyrosomus japonicus, an estuarine-dependent marine fish species, in order to identify the tipping point of tolerance for the larvae of this species. Larval A. japonicus in the present study were reared from the egg up to 22 days after hatching (DAH) under three treatments. The three treatments, (pCO₂ 353 µatm; pH 8.03), (pCO₂ 451 µatm; pH 7.93) and (pCO₂ 602 µatm; pH 7.83) corresponded to levels predicted to occur in year 2050, 2068 and 2090 respectively under the Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (IPCC RCP) 8.5 model. Size-at-hatch, growth, development, and metabolic responses (standard and active metabolic rates and metabolic scope) were assessed and compared between the three treatments throughout the rearing period. Five earlier larval life stages (hatchling – flexion/post-flexion) were identified by the end of the experiment. There were no significant differences in size-at-hatch (p > 0.05), development or the active metabolic (p > 0.05) or metabolic scope (p > 0.05) of fish in the three treatments throughout the study. However, the standard metabolic rate was significantly higher in the year 2068 treatment but only at the flexion/post-flexion stage which could be attributed to differences in developmental rates (including the development of the gills) between the 2068 and the other two treatments. Overall, the metabolic scope was narrowest in the 2090 treatment but varied according to life stage. Although not significantly different, metabolic scope in the 2090 treatment was noticeably lower at the flexion stage compared to the other two treatments, and the development appeared slower, suggesting that this could be the stage most prone to OA. The study concluded that, in isolation, OA levels predicted to occur between 2050 and 2090 will not negatively affect size-at-hatch, growth, development, and metabolic responses of larval A. japonicus up to 22 DAH (flexion/post-flexion stage). The present study also identified the tipping point of tolerance (where negative impacts will begin) in larvae of the species to be between the years 2090 and 2100.Keywords: climate change, ecology, marine, ocean acidification
Procedia PDF Downloads 134
114 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present
Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir
Abstract:
Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLM), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be an inventive model lurking behind OpenAI’s seemingly endless responses that is yet to be uncovered. There may be some unforeseen reasoning emerging from the interconnection of neural networks here. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means to solve societal problems. It's crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the TRIZ 40 inventive principles. Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, that is, to determine whether ChatGPT uses TRIZ, we will follow a stringent protocol that we will detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Of course, problem solving remains the main focus of our endeavours.
Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving
Procedia PDF Downloads 73
113 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India
Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.
Abstract:
Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission, using Subjective Global Assessment (SGA), which classifies patients into three categories: Severely malnourished, Mildly/moderately malnourished, and Normal/well-nourished. Anthropometric measurements like Body Mass Index (BMI), Triceps skin-fold thickness (TSF), and Mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and a Cox regression model were used to determine whether nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 patients (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) expired, of whom 30 died in the hospital and 31 died within 1 month of discharge from hospital. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in 111 Cat C patients) than well-nourished patients (10.1% in 129 Cat A patients), with OR 9.17, p-value 0.007. On multivariate logistic regression, age and higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the expired patient group (mean = 4.38) was significantly higher than that of the surviving cohort (mean = 2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean = 9.4 d) compared to the well-nourished group (mean = 8.03 d), with a trend towards significance (p = 0.061). None of the anthropometric measurements like BMI, MUAC, or TSF showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and treat malnutrition in the hospital and post-discharge are needed.
Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)
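To make the regression adjustment described above concrete, a toy version of the multivariate logistic model (30-day mortality against severe malnutrition by SGA, age, and Charlson Comorbidity Index) could be written as below. The simulated data, coefficients, and variable names are assumptions for illustration and are not the study's records or code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated cohort: mortality vs. severe malnutrition (SGA category C), age, and CCI
    rng = np.random.default_rng(42)
    n = 400
    age = rng.integers(18, 90, n)
    cci = rng.integers(0, 8, n)
    sga_c = rng.integers(0, 2, n)                     # 1 = severely malnourished
    logit_p = -6.0 + 0.03 * age + 0.3 * cci + 0.8 * sga_c
    died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
    df = pd.DataFrame({"died": died, "age": age, "cci": cci, "sga_c": sga_c})

    # Multivariate logistic regression mirroring the adjustment for age and CCI
    fit = smf.logit("died ~ sga_c + age + cci", data=df).fit(disp=0)
    print(np.exp(fit.params))   # odds ratios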
Procedia PDF Downloads 149
112 Challenges in the Last Mile of the Global Guinea Worm Eradication Program: A Systematic Review
Authors: Getahun Lemma
Abstract:
Introduction: Guinea Worm Disease (GWD), also known as dracunculiasis, is one of the oldest diseases in the history of mankind. Dracunculiasis is caused by a parasitic nematode, Dracunculus medinensis. Infection is acquired by drinking water contaminated with copepods containing infective Guinea Worm (GW) larvae. Almost one year after the infection, the worm usually emerges through the skin on a lower limb, causing severe pain and disability. Although there is no effective drug or vaccine against the disease, the chain of transmission can be effectively prevented with simple and cost-effective public health measures. Death due to dracunculiasis is very rare. However, the disease results in a wide range of physical, social and economic sequelae. The disease is usually common in rural, remote places of Sub-Saharan African countries among marginalized societies. Currently, GWD is one of the neglected tropical diseases, and it is on the verge of eradication. The global Guinea Worm Eradication Program (GWEP) was started in 1980. Since then, the program has achieved tremendous success, reducing the global burden from 3.5 million GW cases to only 28 human cases at the end of 2018. However, it has recently been shown that not only humans can become infected: a total of 1,105 animal infections had been reported by the end of 2018. Therefore, the objective of this study was to identify the existing challenges in the last mile of the GWEP in order to inform policy makers and stakeholders on potential measures to finally achieve eradication. Method: A systematic literature review of articles published from January 1, 2000 until May 30, 2019. Papers listed in the Cochrane Library, Google Scholar, ProQuest, PubMed and Web of Science databases were searched and reviewed. Results: Twenty-five articles met the inclusion criteria of the study and were selected for analysis. Relevant data were extracted, grouped and descriptively analyzed. The results showed the main challenges complicating the last mile of the global GWEP: 1. Unusual mode of transmission; 2. Rising animal Guinea Worm infection; 3. Suboptimal surveillance; 4. Insecurity; 5. Inaccessibility; 6. Inadequate safe water points; 7. Migration; 8. Poor case containment measures; 9. Ecological changes; and 10. New geographic foci of the disease. Conclusion: This systematic review identified that most of the current challenges in the GWEP have been present since the start of the campaign. However, the recent change in the epidemiological patterns and nature of GWD in the last remaining endemic countries illustrates a new twist in the global GWEP. Considering the complex nature of the current challenges, there seems to be a need for a more coordinated and multidisciplinary approach to GWD prevention and control measures in the last mile of the campaign. These new strategies would help to make history by eradicating dracunculiasis as the first parasitic disease ever to be eradicated.
Keywords: dracunculiasis, eradication program, guinea worm, last mile
Procedia PDF Downloads 131
111 Anisakidosis in Turkey: Serological Survey and Risk for Humans
Authors: E. Akdur Öztürk, F. İrvasa Bilgiç, A. Ludovisi , O. Gülbahar, D. Dirim Erdoğan, M. Korkmaz, M. Á. Gómez Morales
Abstract:
Anisakidosis is a zoonotic, fish-borne human parasitic disease caused by the accidental ingestion of anisakid third-stage larvae (L3) of members of the family Anisakidae present in infected marine fish or cephalopods. Infection with anisakid larvae can lead to gastric, intestinal, extra-gastrointestinal and gastroallergic forms of the disease. Anisakid parasites have been reported in almost all seas, particularly in the Mediterranean Sea. There is a remarkably high level of risk exposure to these zoonotic parasites, as they are present in economically and ecologically important fish of Europe. Anisakid L3 larvae have also been detected in several fish species from the Aegean Sea. Turkey is a peninsular country surrounded by the Black, Aegean and Mediterranean Seas. In this country, fishing and fishery product consumption are very common. In recent years, there has also been an increase in the consumption of raw fish due to the growing interest in the cuisine of Far East countries. In different regions of Turkey, A. simplex (in Merluccius merluccius, Scomber japonicus, Trachurus mediterraneus, Sardina pilchardus, Engraulis encrasicolus, etc.), Anisakis spp., Contracaecum spp., Pseudoterranova spp. and C. aduncum have been identified as well. Although both the presence of anisakid parasites in fish and fishery products in Turkey and the occurrence of allergic manifestations after fish consumption among Turkish people are accepted, there are no reports of human anisakiasis in this country. Given the high prevalence of anisakid parasites in the country, the absence of reports is likely due not to the absence of clinical cases but rather to the unavailability of diagnostic tools and the low awareness of the presence of this infection. The aim of the study was to set up an IgE Western blot (WB)-based test to detect sensitization to anisakids among Turkish people with a history of allergic manifestations related to fish consumption. To this end, crude worm antigens (CWA) and an allergen-enriched fraction (50–66%) were prepared from L3 of A. simplex (s.l.) collected from Lepidopus caudatus fished in the Mediterranean Sea. These proteins were electrophoretically separated and transferred onto nitrocellulose membranes. By WB, specific proteins recognized by positive control serum samples from sensitized patients were visualized on the nitrocellulose membranes by a colorimetric reaction. The CWA and the 50–66% fraction showed specific bands, mainly due to Ani s 1 (20–22 kDa) and Ani s 4 (9–10 kDa). So far, a total of 7 serum samples from people with allergic manifestations and a positive skin prick test (SPT) after fish consumption have been tested, and all of them resulted negative by WB, indicating a lack of sensitization to anisakids. This preliminary study allowed a specific test to be set up and evidenced the lack of correlation between the two tests, SPT and WB. However, the sample size should be increased to estimate the anisakidosis burden in Turkish people. Keywords: anisakidosis, fish parasite, serodiagnosis, Turkey
Procedia PDF Downloads 141
110 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index, which could be used as a proxy for the quality of risk governance structures. The index (normative framework) is based on eleven variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of the chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees and existence/non-existence of a whistle-blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the chairperson and CEO duality, which are scored on a dichotomous scale with a score of 3 or 5. Where there is a legal/statutory requirement in respect of the above-mentioned variables and it has not been complied with, a score of 1 is assigned. Though, for the larger part of the study period, there was no legal requirement for a CRO, a risk management committee or a whistle-blower policy, a score of 1 is still assigned where these are absent. Recognizing the importance of these variables in the context of risk governance structures, and the fact that the study basically focuses on risk governance, the absence of these variables has been equated with non-compliance with a legal/statutory requirement. On this basis, and since the two dichotomous variables score a minimum of 3, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the 429 non-financial companies that constitute the S&P CNX 500 index. The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it suggested that fixed effects regression would be appropriate. The results indicate that the age and size of a firm have a significant positive impact on its risk governance structures. Further, the post-recession period (2009–2015) witnessed a significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given the fact that each firm is unique in terms of its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances that a company is beset with could be the biggest determinants of its risk governance structures. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management. Keywords: corporate governance, ERM, risk governance, risk management
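The scoring rules described above translate directly into a simple composite-index function. The sketch below is an illustrative reading of those rules; the variable names and example values are assumptions, and the exact cut-offs behind each 1-to-5 score are not given in the abstract.

```python
# Illustrative scoring of the risk governance index described above:
# six variables scored 1-5, two dichotomous variables scored 3 or 5, and three
# presence/absence variables that drop to 1 when the item does not exist.
def risk_governance_index(v: dict) -> int:
    score = sum(v[k] for k in ("board_size", "gender_diversity", "exec_directors",
                               "independent_directors", "mandatory_committees",
                               "voluntary_committees"))           # each scored 1-5
    score += 5 if v["chair_non_executive"] else 3                 # dichotomous
    score += 5 if v["no_ceo_duality"] else 3                      # dichotomous
    for flag, key in (("has_cro", "cro_score"),
                      ("has_risk_committee", "risk_committee_score"),
                      ("has_whistleblower_policy", "whistleblower_score")):
        score += v.get(key, 5) if v[flag] else 1                  # 1 if absent
    return score                                                  # min 15, max 55

example_firm_year = {
    "board_size": 4, "gender_diversity": 2, "exec_directors": 3,
    "independent_directors": 4, "mandatory_committees": 5, "voluntary_committees": 2,
    "chair_non_executive": True, "no_ceo_duality": False,
    "has_cro": True, "has_risk_committee": True, "has_whistleblower_policy": False,
}
print(risk_governance_index(example_firm_year))   # 20 + 8 + 11 = 39
```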
Procedia PDF Downloads 252
109 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?
Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire
Abstract:
The starting point of this research is the contemporary capitalist paradox: there is a real scarcity of long-term investment despite the boom in potential long-term investors. This gap represents a major challenge: there are important needs for long-term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of financial banking intermediaries and governments to provide long-term financing, question the identity of the actors able to provide long-term financing, their methods of financing and the most appropriate forms of intermediation. The issue of long-term financing is deemed to be very important by the EU Commission, as it issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are also crucial. Fair value is indeed well adapted to the trading book in a short-term view, but this method hardly suits a medium- or long-term portfolio. Banks' ability to finance the economy and long-term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with a prudential filter to define the needs for capital and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long-term investing. The balance between financial stability and the need to stimulate long-term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be "appropriately calibrated" and "progressively implemented" so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long-term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions focusing on particular questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses. First, we perform a qualitative coding to identify the arguments of respondents, and subsequently we run a quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the position that a large panel of European stakeholders have on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short-term bias in banking regulation. Keywords: basel 3, fair value, securitization, long term investment, banks, insurers
Procedia PDF Downloads 291
108 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For Aerospace Vehicles, weight/strength ratio, strength, stiffness and stability are the important design drivers. A complex built-up structure is made up of an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plate and shell structures, and 3D solid structures. Justification through simulation involves a check for all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads and is normally achieved through the finite element (FE) method. Over the past few decades, Fiber-reinforced composites are fast replacing the traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring etc. Composite panel constructions are used in aircraft to design primary structure components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels, helicopter, and wind turbine rotor blades, etc. The TWCB demonstrates many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations to model 1D structures suffer from many limitations like shear locking, particularly in slender beams, lower convergence rates due to material coupling in composites, inability to satisfy, equilibrium in the domain and natural boundary conditions (NBC) etc. For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, lower convergence due to mesh distortion etc. This mandates frequent re-meshing to even achieve an acceptable mesh (satisfy stringent quality metrics) for analysis leading to significant cycle time. Besides, currently, there is a need for separate formulations (u/p) to model incompressible materials, and a single unified formulation is missing in the literature. Hence coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods shall be presented in this paper.Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
Procedia PDF Downloads 66
107 Preliminary Studies on Poloxamer-Based Hydrogels with Oregano Essential Oil as Potential Topical Treatment of Cutaneous Papillomas
Authors: Ana Maria Muț, Georgeta Coneac, Ioana Olariu, Ștefana Avram, Ioana Zinuca Pavel, Ionela Daliana Minda, Lavinia Vlaia, Cristina Adriana Dehelean, Corina Danciu
Abstract:
Oregano essential oil is obtained from different parts of the plant Origanum vulgare (fam. Lamiaceae); its primary components, carvacrol and thymol, are widely recognized for their antimicrobial activity, as well as their antiviral and antifungal properties. Poloxamers are triblock copolymers (Pluronic®) formed of three non-ionic blocks, with a hydrophobic polyoxypropylene central chain flanked by two hydrophilic polyoxyethylene chains. They are known for their biocompatibility and sensitivity to temperature changes (sol-to-gel transition of aqueous solutions with temperature increase), but also for their amphiphilic and surface-active nature, which determines the formation of micelles useful for the solubilization of different hydrophobic compounds such as the terpenes and terpenoids contained in essential oils. Thus, these polymers, listed in the European and US Pharmacopoeias and approved by the FDA, are widely used as solubilizers and gelling agents for various pharmaceutical preparations, including topical hydrogels. The aim of this study was to investigate the possibility of solubilizing oregano essential oil (OEO) in polymeric micelles using polyoxyethylene (PEO)–polyoxypropylene (PPO)–polyoxyethylene (PEO) triblock polymers to obtain semisolid systems suitable for topical application. A formulation screening was performed using Pluronic® F-127 at a concentration of 20% and Pluronic® L-31, Pluronic® L-61 and Pluronic® L-62 at concentrations of 0.5%, 0.8% and 1%, respectively, to obtain the polymeric micelle-based systems. Then, to each selected system, with or without 10% absolute ethanol, 5% or 8% OEO was added. The resulting transparent poloxamer-based hydrogels containing solubilized OEO were further evaluated for pH and rheological characteristics (flow behaviour, viscosity, consistency and spreadability), using established techniques such as potentiometric titration, the stationary shear flow test, the penetrometric method and the parallel-plate method. Also, in vitro release and permeation testing of carvacrol from the hydrogels was carried out using vertical diffusion cells with a synthetic hydrophilic membrane and porcine skin, respectively. The pH values and rheological features of all tested formulations were in accordance with official requirements for semisolid cutaneous preparations. However, the formulation containing 0.8% Pluronic® L-31, 10% absolute ethanol, 8% OEO and water and the formulation with 1% Pluronic® L-31, 5% OEO and water produced the highest cumulative amounts of carvacrol released/permeated through the membrane. The present study demonstrated that oregano essential oil can be successfully solubilized in the investigated poloxamer-based hydrogels. These systems can be further investigated as a potential topical therapy for cutaneous papillomas. Funding: This research was funded by Project PN-III-P1-1.1-TE2019-0130, Contract number TE47, Romania. Keywords: oregano essential oil, carvacrol, poloxamer, topical hydrogels
Procedia PDF Downloads 113
106 Understanding the Role of Social Entrepreneurship in Building Mobility of a Service Transportation Models
Authors: Liam Fassam, Pouria Liravi, Jacquie Bridgman
Abstract:
Introduction: The way we travel is rapidly changing: car ownership and use are declining among young people and those resident in urban areas. Also, the increasing role and popularity of sharing economy companies like Uber highlight a movement towards consuming transportation solutions as a service [Mobility of a Service]. This research looks to bridge the knowledge gap that exists between city mobility, smart cities, the sharing economy and social entrepreneurship business models. Understanding of this subject is crucial for smart city design, as a lack of access to affordable transport has been identified as a contributing factor to social isolation, leading to issues around health and wellbeing. Methodology: To explore the current fit vis-à-vis transportation business models and social impact, this research undertook a comparative analysis between a systematic literature review and a Delphi study. The systematic literature review was undertaken to gain an appreciation of the current academic thinking on 'social entrepreneurship and smart city mobility'. The second phase of the research initiated a Delphi study across a group of 22 participants to review future opinion on 'how can social entrepreneurship assist city mobility sharing models?'. The Delphi delivered an initial 220 results, which, once cross-checked for duplication, resulted in 130. These 130 answers were sent back to participants to score their importance against a 5-point Likert scale, enabling a top-10 listing of areas for shared-user transport in society to be gleaned. One further round (round 4) identified no change in the coefficient of variation, thus no further rounds were required. Findings: Initial results of the literature review returned 1,021 journals using the search criteria 'social entrepreneurship and smart city mobility'. Filtering allied to 'peer review', 'date', 'region' and 'Chartered Association of Business Schools' ranking proffered a resultant journal list of 75. Of these, 58 focused on smart city design, 9 on social enterprise in cityscapes, 6 on smart city network design and 3 on social impact, with no journals asserting the need for social entrepreneurship to be allied to city mobility. The future inclusion factors from the Delphi expert panel indicated that smart cities need to include shared economy models in their strategies. Furthermore, social isolation borne of infrastructure costs needs addressing through holistic, apolitical social enterprise models, and a better understanding of social benefit measurement is needed. Conclusion: In investigating the collaboration between key public transportation stakeholders, a theoretical model of social enterprise transportation was formed that positively impacts upon the smart city needs of reduced transport poverty and social isolation. As such, the research has identified how a revised Mobility of a Service business model allied to social entrepreneurship can deliver measurable social benefits associated with smart city design, extending existent research. Keywords: social enterprise, collaborative transportation, new models of ownership, transport social impact
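The Delphi consensus check described above, re-scoring the retained statements on a 5-point Likert scale and stopping once the coefficient of variation stabilises between rounds, can be summarised numerically as in the sketch below. The panel data are simulated and the stopping threshold is an assumption; this is not the study's analysis.

```python
# Illustrative Delphi round summary: per-statement mean Likert score, coefficient
# of variation (CV), and a between-round stability check. Simulated data.
import numpy as np

def round_summary(scores):
    """scores: panellists x statements matrix of 1-5 Likert ratings."""
    means = scores.mean(axis=0)
    cvs = scores.std(axis=0, ddof=1) / means          # per-statement CV
    top10 = np.argsort(means)[::-1][:10]              # ten highest-rated statements
    return means, cvs, top10

rng = np.random.default_rng(1)
round3 = rng.integers(1, 6, size=(22, 130))           # 22 experts, 130 statements
round4 = np.clip(round3 + rng.integers(-1, 2, size=round3.shape), 1, 5)

_, cv3, top10 = round_summary(round3)
_, cv4, _ = round_summary(round4)

print("Top-10 statement indices:", top10)
print("Stop (CV stable)?", abs(cv3.mean() - cv4.mean()) < 0.01)   # assumed threshold
```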
Procedia PDF Downloads 140
105 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate the capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually1. Blood-based biomarkers such as glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (Sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (Specificity 40.1%; 95% CI: 37.8, 42.4). The negative predictive value (NPV) of the test was 99.4% (713/717, 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20 day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.Keywords: biomarker, diagnostic, neurology, TBI
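The headline metrics quoted above follow directly from the reported counts (116 of 120 CT-positive subjects with a positive test interpretation; 713 of 1,779 CT-negative subjects with a negative interpretation). The sketch below recomputes them with exact binomial intervals; the confidence-interval method is an assumption, since the study's own interval calculation is not specified in the abstract.

```python
# Recomputing sensitivity, specificity and NPV from the counts reported above.
# Illustrative only; the pivotal study's confidence-interval method may differ.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 116, 4        # CT-positive subjects: test positive / test negative
tn = 713               # CT-negative subjects with a negative test interpretation
fp = 1779 - tn         # remaining CT-negative subjects (positive interpretation)

def report(name, k, n):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")  # Clopper-Pearson
    print(f"{name}: {k}/{n} = {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")

report("Sensitivity", tp, tp + fn)   # ~96.7%
report("Specificity", tn, tn + fp)   # ~40.1%
report("NPV", tn, tn + fn)           # ~99.4%
```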
Procedia PDF Downloads 66
104 Ahmad Sabzi Balkhkanloo, Motahareh Sadat Hashemi, Seyede Marzieh Hosseini, Saeedeh Shojaee-Aliabadi, Leila Mirmoghtadaie
Authors: Elyria Kemp, Kelly Cowart, My Bui
Abstract:
According to the National Institute of Mental Health, an estimated 31.9% of adolescents have had an anxiety disorder. Several environmental factors may contribute to high levels of anxiety and depression in young people (i.e., Generation Z, Millennials). Moreover, as young people negotiate life on social media, they may begin to evaluate themselves against excessively high standards and adopt self-perfectionism tendencies. Broadly defined, self-perfectionism involves very critical evaluations of the self. Perfectionism may also come from others and may manifest as socially prescribed perfectionism, and young adults are reporting higher levels of socially prescribed perfectionism than previous generations. This rising perfectionism is also associated with anxiety, greater physiological reactivity, and a sense of social disconnection. However, theories from psychology suggest that improvement in emotion regulation can contribute to enhanced psychological and emotional well-being. Emotion regulation refers to the ways people manage how and when they experience and express their emotions. Cognitive reappraisal and expressive suppression are common emotion regulation strategies. Cognitive reappraisal involves changing the meaning of a stimulus by construing a potentially emotion-eliciting situation in a way that changes its emotional impact. By contrast, expressive suppression involves inhibiting the behavioral expression of emotion. The purpose of this research is to examine the efficacy of social marketing initiatives which promote emotion regulation strategies to help young adults regulate their emotions. In Study 1, a single-factor (emotion regulation strategy: cognitive reappraisal, expressive suppression, control) between-subjects design was conducted using an online, non-student consumer panel (n=96). Sixty-eight percent of participants were male, and 32% were female. Study participants belonged to the Millennial and Gen Z cohorts, ranging in age from 22 to 35 (M=27). Participants were first told to spend at least three minutes writing about a public speaking appearance which made them anxious. The purpose of this exercise was to induce anxiety. Next, participants viewed one of three advertisements (randomly assigned) which promoted an emotion regulation strategy: cognitive reappraisal, expressive suppression, or an advertisement non-emotional in nature. After being exposed to one of the ads, participants responded to a two-item measure to assess their emotional state and the efficacy of the messages in fostering emotion management. Findings indicated that individuals in the cognitive reappraisal condition (M=3.91) exhibited the most positive feelings and more effective emotion regulation than the expressive suppression (M=3.39) and control conditions (M=3.72; F(1,92) = 3.3, p<.05). Results from this research can be used by institutions (e.g., schools) in taking a leadership role in attacking anxiety and other mental health issues. Social stigmas regarding mental health can be removed, and a more proactive stance can be taken in promoting healthy coping behaviors and strategies to manage negative emotions. Keywords: emotion regulation, anxiety, social marketing, generation z
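The group comparison reported above is, in essence, a one-way between-subjects ANOVA on the two-item emotion measure. The sketch below illustrates that test with simulated ratings; it will not reproduce the reported F statistic, and the group sizes are assumptions.

```python
# Illustrative one-way ANOVA across the three ad conditions (cognitive
# reappraisal, expressive suppression, control). Simulated data, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reappraisal = rng.normal(3.9, 0.8, 32)    # mean of the two-item measure per person
suppression = rng.normal(3.4, 0.8, 32)
control     = rng.normal(3.7, 0.8, 32)

f_stat, p_value = stats.f_oneway(reappraisal, suppression, control)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```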
Procedia PDF Downloads 205
103 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work aims to present a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate. Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
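The corrector pipeline outlined above (centre the measurements, apply the Kaiser rule, whiten, then build separating hyperplanes around the error set) can be sketched as follows. This is a simplified, hypothetical reading of the description: the error set is treated as a single cluster rather than being split into pairwise-correlated clusters, and the measurement vectors are random stand-ins for the network's internal states.

```python
# A simplified, hypothetical sketch of the "corrector" idea described above:
# centre and whiten the network's measurement vectors (Kaiser rule on the PCA
# eigenvalues), then build one linear hyperplane per cluster of errors and flag
# new samples that fall on the error side. Not the authors' code; the clustering
# of pairwise-correlated errors is reduced here to a single cluster.
import numpy as np

class Corrector:
    def fit(self, x_correct: np.ndarray, x_errors: np.ndarray):
        self.mean_ = x_correct.mean(axis=0)
        xc = x_correct - self.mean_
        # PCA via SVD; Kaiser-style rule: keep components whose variance exceeds the mean.
        _, s, vt = np.linalg.svd(xc, full_matrices=False)
        var = s**2 / (len(xc) - 1)
        keep = var > var.mean()
        self.components_ = vt[keep]
        self.scale_ = np.sqrt(var[keep])                  # whitening scale
        zc, ze = self._project(x_correct), self._project(x_errors)
        # Fisher-style hyperplane separating the error cluster from correct data.
        w = ze.mean(axis=0) - zc.mean(axis=0)
        self.w_ = w / np.linalg.norm(w)
        self.b_ = 0.5 * (zc @ self.w_).max() + 0.5 * (ze @ self.w_).min()
        return self

    def _project(self, x):
        return ((x - self.mean_) @ self.components_.T) / self.scale_

    def is_error(self, x):
        """True where a sample lies on the error side of the hyperplane."""
        return self._project(np.atleast_2d(x)) @ self.w_ > self.b_

# Toy usage with random stand-ins for the AI system's measurement vectors x_i.
rng = np.random.default_rng(0)
corr = Corrector().fit(rng.normal(0, 1, (500, 64)), rng.normal(2, 1, (20, 64)))
print(corr.is_error(rng.normal(2, 1, (5, 64))))
```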
Procedia PDF Downloads 100
102 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping
Authors: Emily Rowe
Abstract:
Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measure foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants who each attended for two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data was collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confident intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system multi-factorial two-way repeated measures ANOVAs were carried out. Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, results suggest the pressure mat is not a valid measure of this parameter since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure for anteroposterior and mediolateral COM velocities (AP velocity: p=0.000, ML velocity target 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development, the COM velocity measure validity could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement or implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables
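The test-retest reliability analysis described above (ICC from a two-way model, absolute agreement, mean of k = 2 sessions) can be illustrated with standard tooling. The sketch below uses simulated centre-of-mass velocity values, not the trial data; in the pingouin library the average-measures, absolute-agreement form is reported as "ICC2k".

```python
# Illustrative between-session ICC computation in the spirit of the analysis
# above (average of k=2 sessions, absolute agreement). Simulated data only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n = 15                                             # participants
true_vel = rng.normal(0.35, 0.08, n)               # hypothetical AP COM velocity (m/s)
session1 = true_vel + rng.normal(0, 0.03, n)       # session-to-session noise
session2 = true_vel + rng.normal(0, 0.03, n)

long = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "session": np.repeat(["S1", "S2"], n),
    "com_velocity": np.concatenate([session1, session2]),
})

icc = pg.intraclass_corr(data=long, targets="participant",
                         raters="session", ratings="com_velocity")
print(icc[icc["Type"] == "ICC2k"][["Type", "ICC", "CI95%"]])
```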
Procedia PDF Downloads 153
101 Consumers Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machine
Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger
Abstract:
Reducing water temperatures in the wash phase of a washing programme and increasing the overall cycle duration are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this strategy of lower temperatures combined with longer programme durations extensively, to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative on-line survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding lower washing temperatures and longer cycle durations in laundry washing, and their effects on consumers' acceptance of the programme. The risk of the long wash cycle is that consumers might not use the energy-efficient Standard programmes, perceiving this option as inconvenient, and therefore switch to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following predefined quotas: involvement in laundry washing: substantial; distribution of gender: more than 50% female; selected age groups: 20–39 years, 40–59 years, 60–74 years; household size: 1, 2, 3, 4 and more than 4 people. Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age class and household size as quotas for the consumer survey distribution in each country. Before starting the analyses, the validity of each dataset was checked with the aid of control questions. After excluding the outlier data, the panel size diminished from 5,100 to 4,843. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that the long cycles could be energy-saving. However, the results of our survey do not confirm that there is a relation between the frequency of using Standard cotton (Eco) or Energy-saving programmes and the duration of those programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take that long; perhaps consumers just choose an additional time-reduction option when selecting those programmes, and this finding might change if the Energy-saving programmes took longer. Therefore, it may be assumed that introducing the programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes in order to save energy than to accept longer programme cycles, and the majority of them accept a deviation from the nominal temperature of the programme as long as the results are good. Keywords: duration, energy-saving, standard programmes, washing temperature
Procedia PDF Downloads 221
100 Effect of Rolling Shear Modulus and Geometric Make up on the Out-Of-Plane Bending Performance of Cross-Laminated Timber Panel
Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer
Abstract:
Cross-laminated timber (CLT) is made from layers of timber boards orthogonally oriented in the thickness direction, and due to this, CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature and is characterized by significantly lower elastic modulus and shear modulus in the planes perpendicular to the fibre direction; it is therefore classified as an orthotropic material and is characterized by nine elastic constants: three elastic moduli in the longitudinal, tangential and radial directions; three shear moduli in the longitudinal-tangential, longitudinal-radial and radial-tangential planes; and three Poisson's ratios. For simplification, timber materials are generally assumed to be transversely isotropic, reducing the number of elastic properties characterizing them to five, where the longitudinal and radial planes are assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir and radiata pine) and three hardwood species (Victorian ash, beech and aspen), subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic results in a negligible error, in the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of the longitudinal shear modulus (GL) to the rolling shear modulus (GR) has a significant effect on the deflection of CLT panels with lower span-to-depth ratios. For softwoods such as Norway spruce and radiata pine, the ratio of the longitudinal shear modulus GL to the rolling shear modulus GR is reported in the literature to be in the order of 12 to 15. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with the ratio of the longitudinal shear modulus to the rolling shear modulus as low as 4. This has resulted in a significant rise in research into the manufacture of CLT entirely from hardwood, as well as from a combination of softwoods and hardwoods. The commonly used beam theories to analyze the performance of CLT panels under out-of-plane loads are the shear analogy method, the gamma method, and the k-method. The shear analogy method has been found to be the most effective method where shear deformation is significant. The effect of the ratio of the longitudinal shear modulus to the rolling shear modulus of the cross-layers on the deflection of CLT under uniformly distributed load, with respect to its length-to-depth ratio, was investigated using the shear analogy method. It was observed that shear deflection is reduced significantly as the ratio of the shear modulus of the longitudinal layers to the rolling shear modulus of the cross-layers decreases. This indicates that there is significant room for improvement of the bending performance of CLT through developing hybrid CLT from a mix of softwood and hardwood. Keywords: rolling shear modulus, shear deflection, ratio of shear modulus and rolling shear modulus, timber
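The shear analogy calculation referred to above combines an effective bending stiffness with an effective shear stiffness in which the cross-layers enter through their rolling shear modulus. The sketch below illustrates this for a hypothetical five-layer panel strip under uniformly distributed load; the layer thicknesses, material values and the E90 = E0/30 assumption for the cross-layers are illustrative, not the paper's data.

```python
# Hypothetical shear analogy sketch for a 5-layer CLT strip, simply supported
# under uniformly distributed load. Material values are softwood-like examples.
import numpy as np

b = 1.0                                          # strip width (m)
t = np.array([0.04, 0.02, 0.04, 0.02, 0.04])     # layer thicknesses (m)
longitudinal = np.array([True, False, True, False, True])

E0, G0, GR = 11e9, 690e6, 50e6                   # Pa; illustrative values
E = np.where(longitudinal, E0, E0 / 30)          # cross layers: low assumed E90
G = np.where(longitudinal, G0, GR)               # cross layers: rolling shear modulus

# Layer centroid positions measured from the panel mid-depth.
z = np.cumsum(t) - t / 2 - t.sum() / 2

# Effective bending stiffness: own bending of each layer plus Steiner terms.
EI = np.sum(E * b * t**3 / 12) + np.sum(E * b * t * z**2)

# Effective shear stiffness per the shear analogy formulation.
a = t.sum() - t[0] / 2 - t[-1] / 2               # distance between outer-layer centroids
GA = a**2 / (t[0] / (2 * G[0] * b)
             + np.sum(t[1:-1] / (G[1:-1] * b))
             + t[-1] / (2 * G[-1] * b))

# Mid-span deflection = bending term + shear term (common CLT handbook form).
q, L = 3_000.0, 4.0                              # UDL (N/m) and span (m)
w_bend = 5 * q * L**4 / (384 * EI)
w_shear = q * L**2 / (8 * GA)
print(f"bending {w_bend*1e3:.2f} mm, shear {w_shear*1e3:.2f} mm "
      f"({w_shear / (w_bend + w_shear):.0%} of total deflection)")
```

Lowering the assumed GL/GR ratio (for example by assigning a hardwood-like rolling shear modulus to the cross-layers) increases GA and visibly shrinks the shear share of the total deflection, which is the effect the abstract describes.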
Procedia PDF Downloads 127