Search results for: central public sector enterprises in India
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13164

1494 Patients' Out-Of-Pocket Expenses-Effectiveness Analysis of Presurgical Teledermatology

Authors: Felipa De Mello-Sampayo

Abstract:

Background: The aim of this study is to undertake, from a patient perspective, an economic analysis of presurgical teledermatology, comparing it with a conventional referral system. Store-and-forward teledermatology allows surgical planning, saving both time and the number of visits involving travel, thereby reducing patients’ out-of-pocket expenses, i.e., the costs patients incur when traveling to and from health providers for treatment, visit fees, and the opportunity cost of time spent in visits. Method: The out-of-pocket expenses-effectiveness of presurgical teledermatology was analyzed in the setting of a public hospital over two years. The mean delay in surgery was used to measure effectiveness. The teledermatology network covering the area served by the Hospital Garcia da Horta (HGO), Portugal, linked the primary care centers of 24 health districts with the hospital’s dermatology department. The patients’ opportunity cost of visits, travel costs, and visit fees of each presurgical modality (teledermatology and conventional referral), the cost ratio between the most and least expensive alternative, and the incremental cost-effectiveness ratio were calculated from the initial primary care visit until surgical intervention. Two groups of patients, those with squamous cell carcinoma and those with basal cell carcinoma, were distinguished in order to compare effectiveness according to the dermatoses. Results: From a patient perspective, the conventional system was 2.15 times more expensive than presurgical teledermatology. Teledermatology had an incremental out-of-pocket expenses-effectiveness ratio of €1.22 per patient per day of delay avoided. This saving was greater in patients with squamous cell carcinoma than in patients with basal cell carcinoma. 
Conclusion: From a patient economic perspective, teledermatology used for presurgical planning and preparation dominates the conventional referral system in terms of out-of-pocket expenses-effectiveness, especially for patients with severe dermatoses.
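The cost ratio and incremental ratio reported above are simple arithmetic over per-patient cost and delay figures. A minimal sketch, using purely hypothetical euro and day values for illustration (the abstract reports only the resulting ratios, 2.15 and €1.22, not the underlying figures):

```python
def cost_ratio(cost_a, cost_b):
    # how many times more expensive pathway A is than pathway B
    return cost_a / cost_b

def icer(cost_conv, cost_tele, delay_conv, delay_tele):
    # incremental out-of-pocket expense per day of surgical delay avoided:
    # (extra cost of conventional referral) / (extra delay it incurs)
    return (cost_conv - cost_tele) / (delay_conv - delay_tele)

# Hypothetical per-patient figures (euros, days) chosen only to illustrate the arithmetic
conv_cost, tele_cost = 215.0, 100.0
conv_delay, tele_delay = 150.0, 50.0

print(cost_ratio(conv_cost, tele_cost))                       # ratio between the two pathways
print(icer(conv_cost, tele_cost, conv_delay, tele_delay))     # euros per day of delay avoided
```

When the cheaper pathway also avoids more delay, as reported here for teledermatology, it dominates and the ratio expresses the saving per day of delay avoided.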

Keywords: economic analysis, out-of-pocket expenses, opportunity cost, teledermatology, waiting time

Procedia PDF Downloads 136
1493 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the regulation of the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These directions have been implemented in each member state’s legal framework; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false public image was created that heat cost allocators are devices that save energy. This notion is wrong, and the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, machine learning has in recent years gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, as well as where the problem is too complex for human intelligence to solve. One machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of installing heat cost allocators in a single building among multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
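The decision tree method mentioned above works by recursively splitting consumption data on thresholds that best separate it. A minimal single-split regression tree (a "stump") conveys the core idea; the feature and consumption values below are illustrative toy data, not taken from the study:

```python
def fit_stump(xs, ys):
    """Fit a one-split regression tree: choose the threshold on a single
    feature that minimises the summed squared error of the two leaf means.
    Assumes at least two data points."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = None
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        m_l = sum(left) / len(left)
        m_r = sum(right) / len(right)
        sse = (sum((v - m_l) ** 2 for v in left)
               + sum((v - m_r) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (xs[k - 1] + xs[k]) / 2, m_l, m_r)
    return best[1:]  # (threshold, left_mean, right_mean)

def predict(stump, x):
    threshold, left_mean, right_mean = stump
    return left_mean if x <= threshold else right_mean

# Toy example: consumption drops sharply above some feature value
stump = fit_stump([1, 2, 3, 10, 11, 12], [5.0, 5.0, 5.0, 1.0, 1.0, 1.0])
```

A full decision tree applies this split recursively over many features; libraries also prune and cross-validate, which is omitted here.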

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method

Procedia PDF Downloads 246
1492 Choosing Mountains Over the Beach: Evaluating the Effect of Altitude on Covid Brain Severity and Treatment

Authors: Kennedy Zinn, Chris Anderson

Abstract:

Chronic Covid syndrome (CCS) is a condition in which individuals who test positive for Covid-19 experience persistent symptoms after recovering from the virus. CCS affects every organ system, including the central nervous system. Neurological “long-haul” symptoms last from a few weeks to several months and include brain fog, chronic fatigue, dyspnea, mood dysregulation, and headaches. Data suggest that 10-30% of individuals testing positive for Covid-19 develop CCS. Current literature indicates a decreased quality of life in those with persistent symptoms. CCS is a pervasive and pernicious COVID-19 sequela, and more research is needed to understand its risk factors, impact, and possible interventions. Research frequently cites cytokine storming as a noteworthy etiology in CCS. Cytokine storming is a malfunctional immune response that facilitates multidimensional, interconnected physiological responses, the most prominent of which include abnormal blood flow, hypoxia/hypoxemia, inflammation, and endothelial damage. Neurological impairment and pathogenesis in CCS parallel those of traumatic brain injury (TBI): both exhibit impairments in memory, cognition, mood, and sustained attention, as well as chronic fatigue. Evidence suggests abnormal blood flow, inflammation, and hypoxemia as shared causal factors, and cytokine storming is also typical in mTBI. The shared symptoms and etiology suggest parallel routes of investigation that allow for a better understanding of CCS. Research on the effect of altitude in mTBI varies: some literature finds decreased rates of concussion at higher altitudes, while other studies suggest that pre-existing mTBI symptoms are exacerbated at higher altitude. This may mean that in CCS, the geographical location where individuals live and the location where they experienced acute Covid-19 symptoms may influence the severity and risk of developing the syndrome. It also suggests that clinics that treat mTBI patients could provide benefits for those with CCS. This study aims to examine the relationship between altitude and CCS as a risk factor and to investigate the longevity and severity of symptoms at different altitudes. Existing patient data from a concussion clinic, comprising fMRI scans and self-reported symptoms, will be used for approximately 30 individuals with CCS symptoms. The association between acclimated altitude and CCS severity will be analyzed: patients will be classified into low-, medium-, and high-altitude groups and compared for differences on fMRI severity scores and self-reported measures. It is anticipated that individuals living at lower altitudes are at higher risk of developing more severe neuropsychological symptoms in CCS, and that a treatment approach for mTBI will also be beneficial to those with CCS.
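The planned analysis, classifying patients into altitude groups and comparing severity scores between them, can be sketched as follows. The altitude cutoffs and patient tuples are entirely hypothetical, chosen only to illustrate the grouping-and-comparison step:

```python
from statistics import mean

def altitude_group(metres, low_cutoff=900, high_cutoff=1800):
    # Cutoffs are illustrative assumptions, not values from the study
    if metres < low_cutoff:
        return "low"
    if metres < high_cutoff:
        return "medium"
    return "high"

def group_means(patients):
    """patients: list of (acclimated_altitude_m, severity_score) tuples.
    Returns the mean severity score per altitude group."""
    groups = {}
    for alt, score in patients:
        groups.setdefault(altitude_group(alt), []).append(score)
    return {g: mean(scores) for g, scores in groups.items()}

# Hypothetical patients: (altitude in metres, fMRI severity score)
patients = [(200, 8), (400, 6), (1200, 5), (2500, 3)]
print(group_means(patients))
```

A real analysis would follow the group means with a formal test (e.g. ANOVA or a non-parametric equivalent), which is omitted here.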

Keywords: altitude, chronic covid syndrome, concussion, covid brain, EPIC treatment, fMRI, traumatic brain injury

Procedia PDF Downloads 130
1491 Influence of Online Media on Governance in Nigeria: The United States-Based Sahara Reporters as a Case Study

Authors: Sheriff Folarin, Oluwafunke Folarin, Hadassah Hussaini, Victor Jubril, Olaniyi Ayodele

Abstract:

Using the famous, unrestrained, and fiery United States-based, Nigerian-owned Sahara Reporters as a case study, this paper examined the impact of online-based media on governance in Nigeria. The discourse is premised on the thesis that the internet has changed the face of journalism and that mainstream but online-based media have made journalism more participatory than ever. Everyone who has something to say finds it easy to say it quickly and conveniently, unhinged and uncensored. This has made online journalism very popular and caused the number of online-based news platforms to increase. As these platforms have given citizens a means to interact and added to the content of the news, they have also succeeded in promoting partisanship. It thus becomes necessary to study the impact of the strident news platform Sahara Reporters on governance in Africa’s biggest democracy, Nigeria. Few studies have examined the impact of mainstream-online media platforms on governance, and those that did focused only on social media such as Facebook and Twitter. This paper is a product of a bigger study whose research design entailed semi-structured interviews with participants from different sectors of society and an analysis of contents from the Sahara Reporters website, from which data were collected. The findings revealed that through uncensored reporting and citizen participation on the platform of Sahara Reporters, there has been a significant popular influence on governance in Nigeria, with government at two levels (national and state) sometimes shifting or yielding ground, particularly from 2011 to 2016. The study also recognized the presence of counter-forces in the online community who want to discredit the information on the site. Through the lens of media dependency theory, the study concluded that the public now increasingly depends on online news media for information, and the more news these media provide, the more people depend on them, making it easier for the people to influence governance.

Keywords: governance, media, online news, Sahara reporters

Procedia PDF Downloads 95
1490 To Correlate Thyroid Dysfunction in Pregnancy with Preterm Labor

Authors: Pushp Lata Sankhwar

Abstract:

INTRODUCTION: Maternal hypothyroidism is the most frequent endocrine disorder in pregnancy, with prevalence varying from 2.5% in the West to 11.0% in India. Maternal hypothyroidism can have detrimental maternal effects, such as an increased risk of preterm labor and PPROM leading to increased maternal morbidity, and effects on the neonate in the form of prematurity and its complications, prolonged hospital stay, neurological developmental problems, delayed milestones, and mental retardation. Hence, the study was planned to evaluate the role of hypothyroidism in preterm labor and its effect on neonates. AIMS AND OBJECTIVES: To correlate overt hypothyroidism, subclinical hypothyroidism, and isolated hypothyroxinemia with preterm labor and the neonatal outcome. MATERIAL AND METHODS: A case-control study of singleton pregnancies was performed over a year, in which a total of 500 patients presenting in the emergency department with preterm labor were enrolled. The thyroid profile of these patients was sent at the time of admission, on the basis of which they were divided into cases (hypothyroid mothers) and controls (euthyroid mothers). The cases were further divided into subclinical hypothyroidism, overt hypothyroidism, and isolated hypothyroxinemia. The neonatal outcomes of these groups were also compared on the basis of the incidence and severity of neonatal morbidity, neonatal respiratory distress, the incidence of neonatal hypothyroidism, and early complications. The feto-maternal data were collected and analysed. RESULTS: Of the 500 antenatal patients with a history of preterm labor enrolled in the study, 67 (13.8%) were found to be hypothyroid. The majority of these mothers had subclinical hypothyroidism (12.2%), followed by overt hypothyroidism in 1% and isolated hypothyroxinemia in 0.6% of cases. The neonates of hypothyroid mothers had higher levels of cord blood TSH, and the mean cord blood TSH level was highest in neonates of mothers with overt hypothyroidism. The need for resuscitation at birth was higher among neonates of hypothyroid mothers, especially those with subclinical hypothyroidism. The requirement for oxygen therapy, whether oxygen by nasal prongs, oxygen by hood, CPAP, CPAP with surfactant therapy, or mechanical ventilation with surfactant therapy, was also significantly higher in neonates of hypothyroid mothers. CONCLUSION: The results of our study imply that uncontrolled and untreated maternal hypothyroidism may lead to preterm delivery. The neonates of mothers with hypothyroidism have higher cord blood TSH levels, and there is an increased incidence and severity of respiratory distress in the neonates of hypothyroid mothers with untreated subclinical hypothyroidism. Hence, we propose routine screening for thyroid dysfunction in pregnant women to prevent thyroid-related feto-maternal complications.
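Case-control associations of the kind examined here (e.g. neonatal respiratory distress by maternal thyroid status) are commonly summarised with an odds ratio from a 2x2 table. A minimal sketch; the cell counts below are hypothetical, since the abstract does not report them:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a*d) / (b*c).
    'Exposed' here would be neonates of hypothyroid mothers,
    'cases' those with the outcome of interest."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts for illustration only
print(odds_ratio(30, 10, 20, 40))  # OR > 1 suggests the outcome is more likely among the exposed
```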

Keywords: high-risk pregnancy, thyroid, dysfunction, hypothyroidism, Preterm labor

Procedia PDF Downloads 88
1489 The Media, Language, and Political Stability in Nigeria: The Example of the Dog and the Baboon Politics

Authors: Attahiru Sifawa Ahmad

Abstract:

The media, whether electronic, print, or social, plays a very significant role in promoting political awareness and the stability of any nation. However, for the media to play this role effectively, a clear and sound grasp of the language of communication is necessary. Otherwise, there is a tendency for the media to spread wrong or misinterpreted information to the public, capable of generating rancour and political instability. One such clear misinterpretation, or misrepresentation, of information was of the Hausa metaphorical expression Kare Jini Biri Jini, quoted from a statement made by Rtd. General Muhammadu Buhari sometime in April 2013 while addressing his supporters from Niger State. In the political presentation of the term Kare Jini Biri Jini, quoted and translated by many print media in Nigeria, it was interpreted to mean 'the dog and the baboon will be soaked in blood', denoting bloodshed and a declaration of war. However, the term literally means 'the dog with blood and the baboon with blood', or 'the dog is bleeding, the baboon is bleeding'; that is, both the dog and the baboon sustained injuries. It is a metaphorical expression denoting a hot competition, a serious struggle between two competing parties that are close in strength and stamina. The expression originated among the hunting communities of traditional Hausa societies. From experience, it was never easy for the hunter's dog to wrestle and hunt a baboon. In many instances the hunt ended as a futile exercise, and even when the dog succeeded, it was only after a serious struggle in which both sustained injuries. This paper seeks to highlight the poverty of vocabulary and poor grasp of Nigerian languages among journalists and young citizens in the country. The paper therefore advocates the retention and effective teaching of indigenous languages in primary and secondary school curricula in Nigeria. The paper equally analyses the political origin of the print media in Nigeria and how, since its first appearance, the print media has been assigned a very important political role by the country's political elites.

Keywords: Baboon, dog, media, politics

Procedia PDF Downloads 217
1488 Climate Change and Dengue Transmission in Lahore, Pakistan

Authors: Sadia Imran, Zenab Naseem

Abstract:

Dengue fever is one of the most alarming mosquito-borne viral diseases. The dengue virus has spread exponentially over the years throughout the tropical and sub-tropical regions of the world, particularly in the last ten years. Contributing factors include changing topography; climate change in the form of erratic seasonal trends, rainfall, an early or late monsoon, and longer or shorter summers and winters; globalization and frequent travel throughout the world; and viral evolution, which has led to more severe forms of dengue. Estimates of the global incidence of dengue infections per year have ranged between 50 million and 200 million; however, recent estimates using cartographic approaches suggest the number is closer to 400 million. In recent years, Pakistan experienced a deadly outbreak of the disease; one reason could be maximum outdoor exposure. Public organizations have observed that the changing climate, especially lower average summer temperatures, and increased vegetation have created tropical-like conditions in the city, which are suitable for dengue virus growth. We conduct a time-series analysis to study the interrelationship between dengue incidence and diurnal ranges of temperature and humidity in Pakistan, with Lahore the main focus of our study. We use annual data from 2005 to 2015 and investigate the relationship between climatic variables and dengue incidence, using time-series analysis to describe temporal trends. The results show a rising trend in dengue over the past 10 years along with rises in temperature and rainfall in Lahore, supporting the view that the world is suffering from climate change and global warming at different levels. Disease outbreaks are among the most alarming indications of the harm ahead, and mitigating measures are needed to keep epidemics from spreading and enveloping cities, countries, and regions.
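The temporal trend described can be estimated as an ordinary least-squares slope over the annual series. A minimal stdlib sketch; the case counts below are hypothetical, chosen only to illustrate the trend computation over 2005-2015:

```python
def trend_slope(years, cases):
    """OLS slope of cases against years: extra cases per year."""
    n = len(years)
    my = sum(years) / n
    mc = sum(cases) / n
    num = sum((y - my) * (c - mc) for y, c in zip(years, cases))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical annual dengue counts rising by ~50 cases per year
years = list(range(2005, 2016))
cases = [100 + 50 * (y - 2005) for y in years]
print(trend_slope(years, cases))  # positive slope indicates a rising trend
```

A fuller time-series analysis would also regress incidence on the climatic covariates (temperature, humidity, rainfall) and check for seasonality and autocorrelation.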

Keywords: Dengue, epidemic, globalization, climate change

Procedia PDF Downloads 231
1487 Healthcare Fire Disasters: Readiness, Response and Resilience Strategies: A Real-Time Experience of a Healthcare Organization of North India

Authors: Raman Sharma, Ashok Kumar, Vipin Koushal

Abstract:

Healthcare facilities are always seen as havens and places of protection for managing external incidents, but the situation becomes more difficult and challenging when such facilities are themselves affected by internal hazards. Such internal hazards are arguably more disruptive than external incidents because they affect the most vulnerable: patients are dependent on supportive measures and are neither in a position to respond to such a crisis nor do they know how to respond. The situation becomes even more arduous and exigent to manage if critical care areas like Intensive Care Units (ICUs) and Operating Rooms (OR) are involved, since the condition of the patients housed there makes it difficult to move them on an immediate basis. Healthcare organisations use different types of electrical equipment, inflammable liquids, and medical gases, often at a single point of use; hence, any sort of error can spark a fire. Even though healthcare facilities face many fire hazards, damage caused by smoke rather than flames is often more severe. Besides burns, smoke inhalation is the primary cause of fatality in fire-related incidents. The greatest cause of illness and mortality in fire victims, particularly in enclosed places, appears to be the inhalation of fire smoke, which contains a complex mixture of gases in addition to carbon monoxide. Therefore, healthcare organizations are required to have a well-planned disaster mitigation strategy and proactive, well-prepared manpower to cater for all types of exigencies resulting from internal as well as external hazards. This case report delineates a true OR fire incident in the Emergency Operation Theatre (OT) of a tertiary care multispecialty hospital and details real-life evidence of the challenges encountered by OR staff in preserving both life and property. No adverse event was reported during or after this fire, yet this case report aims to congregate the lessons identified from the incident in a sequential and logical manner. Timely smoke evacuation and prevention of the spread of smoke to adjoining patient care areas by adopting appropriate measures, viz. compartmentation, pressurisation, dilution, ventilation, buoyancy, and airflow, helped to reduce smoke-related fatalities. Precautionary measures may therefore be implemented to mitigate such incidents: careful coordination, continuous training, and fire drill exercises can improve overall outcomes and minimize the possibility of these potentially fatal problems, thereby making a safer healthcare environment for every worker and patient.

Keywords: healthcare, fires, smoke, management, strategies

Procedia PDF Downloads 65
1486 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America

Authors: Austin P. Hopkins, R. Daren Harmel, Jim A Ippolito, P. J. A. Kleinman, D. Sahoo

Abstract:

Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land use patterns also affect nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Annual N and P runoff loads (i.e., dissolved, particulate, and total N and P, kg/ha/yr) were compiled for cropland and grasslands in each North American Level II EPA ecoregion and for various agricultural management practices (land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion, to showcase the analyses possible with the data in MANAGE; the impacts of these factors on N and P runoff were then analyzed. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Because the data were not normally distributed, non-parametric analyses, mainly Mann-Whitney tests, were conducted in R on median values weighted by the site-years of data, and Dunn tests and box-and-whisker plots were used to evaluate significant differences visually and statistically. Of the 50 North American ecoregions, 11 had sufficient data and site-years to be included in the analysis. Examining ecoregions alone, ER 9.2 Temperate Prairies had a significantly higher total N, at 11.7 kg/ha/yr, than ER 9.4 South Central Semi-Arid Prairies, with a total N of 2.4 kg/ha/yr. For total P, ER 8.5 Mississippi Alluvial and Southeast USA Coastal Plains had a higher load, at 3.0 kg/ha/yr, than ER 8.2 Southeastern USA Plains, with a load of 0.25 kg/ha/yr. Tillage and land use had substantial impacts on nutrient loads: in ER 9.2 Temperate Prairies, conventional tillage had a total N load of 36.0 kg/ha/yr while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, when corn was the predominant land use, total N levels significantly increased compared to grassland or other grains; in ER 8.4 Ozark-Ouachita, corn had a total N of 22.1 kg/ha/yr while grazed grassland had a total N of 2.9 kg/ha/yr. There are further intricacies in how agricultural management practices interact with one another and with ecological conditions to affect continental aquatic nutrient loads that remain to be explored. This research provides a stepping stone to further understanding of land and resource stewardship and best management practices.
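The Mann-Whitney test used above compares two ecoregions' load distributions via rank sums rather than means. A minimal sketch of the U statistic (tie-aware via average ranks, without the p-value machinery a library like R or SciPy provides); the load values are toy numbers:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y,
    using average ranks for tied values."""
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # positions i+1 .. j share the average rank of the tie run
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Toy runoff loads (kg/ha/yr) for two hypothetical ecoregions
print(mann_whitney_u([11.7, 9.3, 14.1], [2.4, 1.9, 3.0]))
```

U ranges from 0 to len(x)*len(y); values near either extreme indicate that one sample's loads systematically exceed the other's.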

Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use

Procedia PDF Downloads 75
1485 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI aids chest radiology in the following ways. It detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules as low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum. It automatically localises vertebral segments, labels ribs, and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation, providing percentages of tissue within defined attenuation (HU) ranges in addition to automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurement, and vertebral body segmentation and density measurement. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progress of computerized systems and improvements in software algorithms will render AI the radiologist's second pair of hands.
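The attenuation data mentioned above (mean, minimum, and maximum HU within a nodule, and percentages of tissue within HU ranges) reduce to summary statistics over the segmented voxels. A minimal sketch with hypothetical HU values in place of a real segmentation mask:

```python
from statistics import mean

def attenuation_stats(hu_values):
    """Summarise nodule attenuation in Hounsfield units."""
    return {"mean": mean(hu_values), "min": min(hu_values), "max": max(hu_values)}

def fraction_in_range(hu_values, lo, hi):
    """Fraction of voxels whose attenuation falls within [lo, hi] HU,
    e.g. an emphysema-like threshold band."""
    return sum(lo <= v <= hi for v in hu_values) / len(hu_values)

# Hypothetical HU samples from a segmented region
voxels = [-30, 10, 50]
print(attenuation_stats(voxels))
print(fraction_in_range(voxels, 0, 100))
```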

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 69
1484 Gender, Age, and Race Differences in Self-Reported Reading Attitudes of College Students

Authors: Jill Villarreal, Kristalyn Cooksey, Kai Lloyd, Daniel Ha

Abstract:

Little research has been conducted to examine college students' reading attitudes, including students' perceptions of reading behaviors and reading abilities. This is problematic, as reading assigned course material is a critical component of an undergraduate student's academic success. For this study, flyers were electronically disseminated to instructors at 24 public and 10 private U.S. institutions in “Reading-Intensive Departments” including Psychology, Sociology, Education, Business, and Communications. We requested the online survey be completed as an in-class activity during the fall 2019 and spring 2020 semesters. All participants voluntarily completed the questionnaire anonymously. Of the participants, 280 self-identified their race as Black and 280 self-identified their race as White; 177 self-identified their gender as male and 383 as female. Participants ranged in age from 18 to 24. Factor analysis found four dimensions resulting from the questions regarding reading. The first, which we interpret as “Reading Proficiency”, accounted for 19% of the variability. The second dimension was “Reading Anxiety” (15%), the third was “Textbook Reading Ability” (9%), and the fourth was “Reading Enjoyment” (8%). Linear models on each of these dimensions revealed no effect of age, gender, race, or income on “Reading Proficiency”. The linear model of “Reading Anxiety” showed a significant effect of race (p = 0.02), with higher anxiety in white students, as well as higher reading anxiety in female students (p < 0.001). The model of “Textbook Reading Ability” found a significant effect of race (p < 0.001), with higher textbook problems in white students. The model of “Reading Enjoyment” showed significant effects of race (p = 0.013) with more enjoyment for white students, gender (p = 0.001) with higher enjoyment for female students, and age (p = 0.033) with older students showing higher enjoyment. 
These findings suggest that gender, age, and race are important factors in many aspects of college students' reading attitudes. Further research will investigate possible causes for these differences. In addition, the effectiveness of college-level programs to reduce reading anxiety, promote the reading of textbooks, and foster a love of reading will be assessed.
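The variance percentages quoted for the four factor dimensions (19%, 15%, 9%, 8%) follow from each factor's eigenvalue as a share of the total variance. A minimal sketch; the eigenvalues below are hypothetical, since the abstract reports only the resulting percentages:

```python
def variance_explained(eigenvalues):
    """Proportion of total variance attributable to each factor."""
    total = sum(eigenvalues)
    return [ev / total for ev in eigenvalues]

# Hypothetical eigenvalues from a factor analysis
print(variance_explained([2.0, 1.0, 1.0]))  # shares summing to 1.0
```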

Keywords: age, college, gender, race, reading

Procedia PDF Downloads 147
1483 Relationships between Screen Time, Internet Addiction and Other Lifestyle Behaviors with Obesity among Secondary School Students in the Turkish Republic of Northern Cyprus

Authors: Ozen Asut, Gulifeiya Abuduxike, Imge Begendi, Mustafa O. Canatan, Merve Colak, Gizem Ozturk, Lara Tasan, Ahmed Waraiet, Songul A. Vaizoglu, Sanda Cali

Abstract:

Obesity among children and adolescents is one of the critical public health problems worldwide. Internet addiction is one of the sedentary behaviors that contribute to obesity through excessive screen time and reduced physical activity. We aimed to examine the relationships between screen time, internet addiction, and other lifestyle behaviors and obesity among students at the Near East College in Nicosia, Northern Cyprus. A cross-sectional study was conducted among 469 secondary school students with a mean age of 11.95 (SD, 0.81) years. A self-administered questionnaire was used to assess screen time and lifestyle behaviors. The Turkish-adapted short form of the Internet Addiction Test was used to assess internet addiction. Height and weight were measured to calculate BMI, which was classified according to BMI percentiles for sex and age. Descriptive analysis, the chi-square test, and multivariate regression analysis were performed. Of all participants, 17.2% were overweight or obese, 18.1% had internet addiction, and 40.7% reported more than two hours of screen time per day. After adjusting for age and sex, eating snacks while watching television (OR, 3.04; 95% CI, 1.28-7.21), self-perceived body weight (OR, 24.9; 95% CI, 9.64-64.25), and having a PlayStation in the bedroom (OR, 4.6; 95% CI, 1.85-11.42) were significantly associated with obesity. Screen time (OR, 4.68; 95% CI, 2.61-8.38; p<0.001) and having a computer in the bedroom (OR, 1.7; 95% CI, 1.01-2.87; p=0.046) were significantly associated with internet addiction, whereas parental complaints about lengthy technology use (OR, 0.23; 95% CI, 0.11-0.46; p<0.001) were a protective factor against internet addiction. Prolonged screen time, internet addiction, sedentary lifestyles, and reduced physical and social activities are interrelated, multidimensional factors that lead to obesity among children and adolescents. An integrated family- and school-based approach should be implemented to tackle obesity.
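The adjusted odds ratios reported above come from multivariate logistic regression. As a minimal sketch of the underlying statistic (the 2x2 counts below are hypothetical, not the study's data), an unadjusted odds ratio and its 95% confidence interval can be computed with the Woolf method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with 95% CI (Woolf method) from a 2x2 table:
    a = exposed & obese, b = exposed & not obese,
    c = unexposed & obese, d = unexposed & not obese."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: 20/30 among the exposed vs 10/40 among the unexposed
or_, lo, hi = odds_ratio_ci(20, 30, 10, 40)
```

An adjusted OR, as in the abstract, would instead come from fitting a logistic model with age and sex as covariates; the sketch shows only the crude association.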

Keywords: adolescents, internet addiction, lifestyle, Northern Cyprus, obesity, screen time

Procedia PDF Downloads 141
1482 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home

Authors: Laura M. F. Bertens

Abstract:

The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show: each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to gain a deeper understanding of the ways in which the home functions as, and feels like, a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim used the history and memories of the house as the starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family 'resets' the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and of the ways in which art can represent these.

Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace

Procedia PDF Downloads 156
1481 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or awareness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to them, stimuli are grouped into triplets that always appear together (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite its widespread use in testing SL, the 2-AFC task has come under increasing criticism: it is an offline, post-learning task that only assesses the outcome of the learning that occurred during the preceding exposure phase, and it might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream containing eight three-syllable nonsense words, four with high TPs and four with low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children in the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in auditory input, which might underlie their language difficulties.
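The transitional-probability statistic described above is straightforward to compute: TP(A→B) is the count of A followed by B divided by the count of A. A minimal sketch, using a hypothetical syllable stream in the spirit of the abstract's 'tokibu'/'tipolu' example:

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP for every adjacent syllable pair: count(A,B) / count(A as a first element)."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Within a triplet like to-ki-bu the TP is high (the syllables always co-occur);
# across triplet boundaries it is lower.
stream = ["to", "ki", "bu", "ti", "po", "lu", "to", "ki", "bu", "go", "pi", "la"]
tps = transitional_probabilities(stream)
```

Here TP(to→ki) is 1.0 because 'ki' always follows 'to', while TP(bu→ti) is 0.5 because 'bu' sits at a triplet boundary; learners are assumed to exploit exactly these dips to segment words.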

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 186
1480 Preparedness Level of Disaster Management Institutions in Context of Floods in Delhi

Authors: Aditi Madan, Jayant Kumar Routray

Abstract:

Purpose: Over the years, flood-related risks have compounded due to increasing vulnerability caused by rapid urbanisation and a growing population. This increase indicates the need to enhance the preparedness of institutions to respond to floods. The study describes the disaster management structure and its linkages with the institutions involved in managing disasters. It addresses issues and challenges associated with the readiness of disaster management institutions to respond to floods, and it suggests policy options for enhancing their current state of readiness by considering factors such as institutional capacity, manpower, finance, technical capability, leadership and networking, training and awareness programs, and monitoring and evaluation. Methodology: The study is based on qualitative data, with statements and outputs from primary and secondary sources, to understand the institutional framework for disaster management in India. Primary data included field visits and interviews with officials from institutions managing disasters and with the affected community, to identify the challenges faced in engaging national, state, district, and local level institutions in managing disasters. For focus group discussions, meetings were held with district project officers and coordinators, local officials, community-based organisations, civil defence volunteers, and community heads. These discussions were held to identify the challenges associated with institutions' preparedness to respond to floods. Findings: Results show that disasters are handled by the district authority, and the role of local institutions is limited to a reactive one during a disaster. The data also indicate that although the existing institutional setup is well coordinated at the district level, it needs improvement at the local level. Wide variations exist in awareness and perception among the officials engaged in managing disasters. Additionally, their roles and responsibilities need to be clearly defined, with an adequate budget and dedicated permanent staff for managing disasters. Institutions need to utilise the existing manpower through proper delegation of work. Originality: The study suggests that disaster risk reduction needs to focus more on the inclusion of local urban bodies. To ensure community participation, it is important to address communities' social and economic problems, since such issues can overshadow attempts at reducing risks. This paper thus suggests developing direct linkages between institutions and the community to enhance preparedness to respond to floods.

Keywords: preparedness, response, disaster, flood, community, institution

Procedia PDF Downloads 232
1479 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

Integrated urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security concerns. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in developing sector-coupling strategies for establishing solar power in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar generation and inject it into the urban network during peak periods. The simulations and analyses were carried out in EnergyPlus. Passive solar gain involves maximizing the amount of sunlight entering a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder. Additionally, 3D models, built in Revit, are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce strain on the power grid. The study also highlights the influence of ancient Persian architecture on Iran's urban planning system, the potential for reducing pollutants in building construction, and advances in eco-city planning and development, including emerging practices and strategies for integrating sustainability goals.

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 72
1478 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship, the officers look for visual aids to guide navigation and to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, and buoys, among others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) to display digital information about these aids in support of navigation. To date, nautical mobile AR applications have been limited to the leisure industry. If proved viable, this prototype could facilitate the creation of similar applications to help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port. This overloaded the display and required over 45 seconds to load the data; extra filters for the navigational aids are therefore being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference with the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at allowing users to manually calibrate the compass. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to apply a user-centered design approach to create better interfaces and display technologies for enhanced navigation aids.
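The bearing-and-distance computation the application performs can be sketched with the standard great-circle formulas (haversine distance and initial bearing). This is a generic illustration under a spherical-Earth assumption, not the prototype's actual code:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees clockwise
    from true north) from the ship (lat1, lon1) to a navigational aid (lat2, lon2)."""
    R = 6371000.0  # mean Earth radius, metres (spherical approximation)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, normalized to [0, 360)
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

A measured compass error of 5 to 12 degrees, as reported in the sea trials, would offset the bearing at which each AR marker is drawn, which is why a manual calibration offset is being considered.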

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 326
1477 Age-Associated Seroprevalence of Toxoplasma gondii in 10892 Pregnant Women in Senegal between 2016 and 2019

Authors: Ndiaye Mouhamadou, Seck Abdoulaye, Ndiaye Babacar, Diallo Thierno Abdoulaye, Diop Abdou, Seck Mame Cheikh, Diongue Khadim, Badiane Aida Sadikh, Diallo Mamadou Alpha, Kouedvidjin Ekoué, Ndiaye Daouda

Abstract:

Background: Toxoplasmosis is a parasitic disease with high rates of gestational and congenital infection worldwide and is therefore considered a public health problem and a neglected disease. The aim of this study was to determine the seroprevalence of toxoplasmosis in pregnant women referred to the medical biology laboratory of the Pasteur Institute of Dakar (Senegal) between January 2014 and December 2019. Methodology: This was a cross-sectional, descriptive, retrospective study of 10,892 blood samples from pregnant women aged 16 to 46 years. The Architect Toxo IgG/IgM assay from Abbott Laboratories, a chemiluminescent microparticle immunoassay (CMIA), was used for the quantitative determination of antibodies against Toxoplasma gondii in human serum. Results: In total, over the period from January 2014 to December 2019, 10,892 requests for toxoplasmosis serology in pregnant women were included. The mean age of the patients was 31.2 ± 5.72 years. The overall seroprevalence of T. gondii in pregnant women was estimated at 28.9% (95% CI, 28.0-29.7). In a multivariate logistic regression analysis, after adjustment for covariates such as study period, pregnant women aged 36-46 years were more likely to carry IgG antibodies to T. gondii than pregnant women younger than 36 years. Conclusion: T. gondii seroprevalence was significantly higher in pregnant women older than 36 years, leaving younger women more susceptible to primary T. gondii infection and their babies to congenital toxoplasmosis. There is a need to increase awareness of the risk factors for toxoplasmosis and its modes of transmission in these high-risk groups, supported by epidemiologic studies of the distribution of risk factors for toxoplasmosis in pregnant women and women of childbearing age.
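The reported seroprevalence and its interval are consistent with a simple binomial proportion and a normal-approximation confidence interval. A sketch, where the positive count of roughly 3,148 is inferred from 28.9% of 10,892 rather than stated in the abstract:

```python
import math

def prevalence_ci(positives, n, z=1.96):
    """Point prevalence and normal-approximation 95% confidence interval."""
    p = positives / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the Wald interval
    return p, p - half, p + half

# ~28.9% of 10,892 sera positive (positive count inferred, not reported)
p, lo, hi = prevalence_ci(3148, 10892)
```

With n this large the Wald interval is tight (about ±0.85 percentage points), matching the reported 28.0-29.7 range.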

Keywords: toxoplasmosis, pregnancy, seroprevalence, Senegal

Procedia PDF Downloads 130
1476 Free and Open Source Software for BIM Workflow of Steel Structure Design

Authors: Danilo Di Donato

Abstract:

The continuous new releases of free and open source software (FOSS) and the high costs of proprietary software (whose monopoly is characterized by closed code and a low level of implementation and customization by end users) prompt a reflection on the tools that can be adopted for the design and representation of new steel constructions. The paper presents experimentation carried out to verify the actual potential and effective applicability of FOSS to the BIM modeling of steel structures, particularly with the goal of a workflow that achieves a high level of development (LOD) and allows effective interchange between different software packages. To this end, the examined software packages are those with open source or freeware licenses, in order to evaluate their use in architectural praxis. The test primarily involved FreeCAD, the only open source software that allows a complete and integrated BIM workflow, and the results were then compared with those of two proprietary packages, SketchUp and Tekla BIMsight, which are released in free versions that are not usable for commercial purposes. The experiments carried out on open source and freeware software were then compared with the outcomes obtained with two proprietary packages, SketchUp Pro and Tekla Structures, the latter of which has modules specifically addressed to the design of steel structures. The evaluation used comparative criteria defined on the basis of categories related to the reliability, efficiency, potential, achievable LOD, and user-friendliness of the analyzed software packages. In order to verify the actual outcomes of FOSS BIM for steel structure projects, these results were compared with a simulation of a real case study carried out with proprietary BIM modeling software. The same design theme, the project of a shelter for a public space, was therefore developed using the different software packages. The purpose of the contribution is thus to assess the developments and potential inherent in FOSS BIM, in order to estimate its effective applicability to professional practice, its limits, and the new fields of research it suggests.

Keywords: BIM, steel buildings, FOSS, LOD

Procedia PDF Downloads 172
1475 Antineoplastic Effect of Tridham and Penta Galloyl Glucose in Experimental Mammary Carcinoma Bearing Rats

Authors: Karthick Dharmalingam, Stalin Ramakrishnan, Haseena Banu Hedayathullah Khan, Sachidanandanam Thiruvaiyaru Panchanadham, Shanthi Palanivelu

Abstract:

Background: Breast cancer is emerging as one of the most serious cancers affecting women worldwide. Hence, there is a need to search for and test new drugs. Herbal formulations used in Siddha preparations have proved effective against various types of cancer. They also offer the advantage of synergistic amplification and can diminish possible adverse effects. Tridham (TD) is a herbal formulation prepared in our laboratory, consisting of Terminalia chebula, Elaeocarpus ganitrus, and Prosopis cineraria in a definite ratio, which has been used for the treatment of mammary carcinoma. Objective: To study the restorative effect of Tridham and penta galloyl glucose (PGG, a component of TD) on DMBA-induced mammary carcinoma in female Sprague Dawley rats. Materials and Methods: Rats were divided into seven groups of six animals each. Group I (control) received corn oil. In Group II, mammary carcinoma was induced by a single oral dose of DMBA dissolved in corn oil. Groups III and IV were induced with DMBA and subsequently treated with Tridham and penta galloyl glucose, respectively, for 48 days. Group V was treated with DMBA and subsequently with a standard drug, cyclophosphamide. Groups VI and VII were given Tridham and penta galloyl glucose alone, respectively, for 48 days. After the experimental period, the animals were sacrificed by cervical decapitation. The mammary gland tissue was excised, and the levels of antioxidants were determined by biochemical assay. p53 and PCNA expression were assessed using immunohistochemistry. Nrf-2, Cox-2, and caspase-3 protein expression were studied by Western blotting. p21, Bcl-2, Bax, Bad, and caspase-8 gene expression were studied by RT-PCR. Results: Histopathological studies confirmed the induction of mammary carcinoma in DMBA-induced rats, and treatment with TD and PGG resulted in tumour regression. The levels of enzymic and non-enzymic antioxidants, cell cycle inhibitory markers, and apoptotic markers were decreased in DMBA-induced rats compared to control rats. These parameters were restored to near-normal levels on treatment with Tridham and PGG. Conclusion: The results of the present study indicate that the antineoplastic effects of Tridham and PGG are exerted through the modulation of antioxidant status and the expression of cell cycle regulatory and apoptotic markers. Acknowledgment: Financial assistance in the form of an ICMR-SRF from the Indian Council of Medical Research (ICMR), India, is gratefully acknowledged.

Keywords: antioxidants, mammary carcinoma, penta galloyl glucose, Tridham

Procedia PDF Downloads 274
1474 Challenges of Carbon Trading Schemes in Africa

Authors: Bengan Simbarashe Manwere

Abstract:

The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share and participation in the carbon market to limited access to electricity: approximately 800 million people spread across 47 African countries generate as much power as Spain, with a population of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate Africa's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. With these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at a rate of R120/tCO2e, the tax deliberately compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the period from 1 June 2019 to 31 December 2022, during which the tax was meant to escalate at CPI + 2% for Scope 1 emitters. In the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies to limit agents' carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the associated carbon trading regulations were only finalised in December 2020, pointing to a delay in the establishment of the CTS. Relatedly, emitters in South Africa are not yet able to benefit from the 95% reduction in the effective carbon tax rate, from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and built from the beginning as new, tailor-made systems in countries such as France, Australia, and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the JSE and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. The carbon trading scheme will therefore not be constructed from scratch, and carbon will be treated as a commodity in order to align with existing institutional and infrastructural capacity. This explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important because South Africa is an innovator in carbon trading, and the successful trading of carbon credits on the JSE will be imitated, first by early adopters and thereafter by the middle majority.
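The phase-1 escalation rule (CPI + 2% per year) and the 95% allowance cap reduce to simple compounding arithmetic. A sketch with a hypothetical CPI path, since the Act does not fix future CPI values:

```python
def escalate_phase1(rate, cpi_by_year):
    """Compound a carbon tax rate (R/tCO2e) at CPI + 2% per year (phase 1 rule)."""
    for cpi in cpi_by_year:
        rate *= 1 + cpi + 0.02
    return rate

# Hypothetical CPI of 4.5% over three years of phase 1
final_rate = escalate_phase1(120.0, [0.045, 0.045, 0.045])

# The maximum 95% allowance cuts the effective rate from R120 to R6 per tCO2e,
# the figures cited in the text
effective_rate = 120.0 * (1 - 0.95)
```

Under the phase-2 rule the 0.02 term would be dropped, leaving escalation at CPI alone.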

Keywords: carbon trading scheme (CTS), Johannesburg Stock Exchange (JSE), Carbon Tax Act 15 of 2019, South Africa

Procedia PDF Downloads 62
1473 Environmental Radioactivity Analysis by a Sequential Approach

Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab

Abstract:

Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and to assess the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples could help to get around this difficulty, but unfortunately, new issues are raised by gamma-ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, without quantification and in a short time, the gamma rays of a low-count source. This method does not require the acquisition of a pulse-height spectrum, as usually adopted in gamma spectrometry measurements. It is based on a chronological record of each detected photon through simultaneous measurement of its energy ε and its arrival time τ at the detector, the pair of parameters [ε, τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main object of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for appropriate health oversight of the public and of the workers concerned, the analysis has been extended to obtain a reliable quantification of the radionuclides present in environmental samples. For illustration, we consider the problem of detection and quantification of 238U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by a hyperpure germanium (HPGe) semiconductor detector, of the 63 keV gamma rays emitted by 234Th (a progeny of 238U). The generated EMS series are analyzed by Bayesian inference. The application of the sequential Bayesian approach to environmental radioactivity analysis offers the possibility of reducing measurement time without requiring large environmental samples, and consequently avoids the associated drawbacks. The work is still in progress.
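The sequential analysis of an EMS can be illustrated with a running log-likelihood ratio over event energies. This is a simplified stand-in for the paper's Bayesian inference: it assumes a flat background and a Gaussian line at 63 keV, and every parameter value below is an assumption chosen for illustration:

```python
import math

def sequential_llr(energies, e0=63.0, sigma=1.0, signal_frac=0.2, e_max=200.0):
    """Running log-likelihood ratio over an event mode sequence.
    H1: flat background plus a Gaussian line at e0 keV (fraction signal_frac).
    H0: flat background alone on [0, e_max] keV.
    Returns the cumulative LLR after each detected event."""
    llrs, llr = [], 0.0
    p_bkg = 1.0 / e_max  # flat background density
    for e in energies:
        p_line = math.exp(-((e - e0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        p1 = (1 - signal_frac) * p_bkg + signal_frac * p_line
        llr += math.log(p1 / p_bkg)  # evidence accumulates event by event
        llrs.append(llr)
    return llrs
```

Events clustered near 63 keV drive the LLR upward (favoring the source hypothesis) as soon as enough evidence has accumulated, which is the point of a sequential test: a decision can be reached without waiting for a full spectrum.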

Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method

Procedia PDF Downloads 493
1472 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study

Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi

Abstract:

Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of adverse effects in the long term and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAamide palmitoylethanolamide, i.e., a body’s own endocannabinoid-like compound playing a sentinel role in nociception. PGA, especially in the micronized formulation, was shown safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left being injected an equal volume of saline. Starting the third day after MIA injection, treatments were administered orally three times per week for 21 days, at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and 3, 7, 14 and 21 days post-injection, mechanical allodynia was measured using a dynamic plantar Von Frey hair aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated on the same time points by walking track analysis using the sciatic functional index. 
On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 at all evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function already on day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
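The Bonferroni post-hoc step used above can be sketched in a few lines. A hedged illustration of how the correction scales raw pairwise p-values by the number of comparisons (the group names and p-values below are invented for illustration, not the study's data):

```python
# Minimal sketch of a Bonferroni correction, as applied after ANOVA when
# comparing several treatment groups (e.g., vehicle vs. PGA vs. PGA-cur).
from math import comb

def bonferroni_adjust(p_values):
    """Multiply each raw p-value by the number of comparisons, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative raw p-values for three pairwise comparisons:
raw_p = {"vehicle_vs_PGA": 0.0004,
         "vehicle_vs_PGAcur": 0.0001,
         "PGA_vs_PGAcur": 0.004}

adjusted = dict(zip(raw_p, bonferroni_adjust(list(raw_p.values()))))

# With k = 3 groups there are comb(3, 2) = 3 pairwise tests.
n_tests = comb(3, 2)
```

A comparison stays significant only if its adjusted p-value remains below the chosen alpha, which is what keeps the family-wise error rate controlled.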

Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine

Procedia PDF Downloads 111
1471 Real-world Characterization of Treatment Intensified (Add-on to Metformin) Adults with Type 2 Diabetes in Pakistan: A Multi-center Retrospective Study (Converge)

Authors: Muhammad Qamar Masood, Syed Abbas Raza, Umar Yousaf Raja, Imran Hassan, Bilal Afzal, Muhammad Aleem Zahir, Atika Shaheer

Abstract:

Background: Cardiovascular disease (CVD) is a major burden among people with type 2 diabetes (T2D), with 1 in 3 reported to have CVD. Therefore, understanding real-world clinical characteristics and prescribing patterns could help in better care. Objective: The CONVERGE (Cardiovascular Outcomes and Value in the Real world with GLP-1RAs) study characterized demographics and medication usage patterns in the overall treatment-intensified (add-on to metformin) T2D population. The data were further divided into subgroups {dipeptidyl peptidase-4 inhibitors (DPP-4is), sulfonylureas (SUs), insulins, glucagon-like peptide-1 receptor agonists (GLP-1RAs) and sodium-glucose cotransporter-2 inhibitors (SGLT-2is)}, according to the latest prescribed antidiabetic agent (ADA), in India/Pakistan/Thailand. Here, we report findings from Pakistan. Methods: This multi-center retrospective study utilized data from medical records between 13-Sep-2008 (post-market approval of GLP-1RAs) and 31-Dec-2017 in adults (≥18 years old). The data were collected from five centers/institutes located in major cities of Pakistan (Karachi, Lahore, Islamabad, and Multan): National Hospital, Aga Khan University Hospital, Diabetes Endocrine Clinic Lahore, Shifa International Hospital, and Mukhtar A Sheikh Hospital Multan. Data were collected at the start of the medical record and at 6 or 12 months prior to baseline, depending on variable type, and analyzed descriptively. Results: Overall, 1,010 patients were eligible. At baseline, the overall mean (SD) age was 51.6 (11.3) years, T2D duration was 2.4 (2.6) years, HbA1c was 8.3% (1.9), and 35% had received ≥1 CVD medication in the year before baseline. The most frequently prescribed ADAs post-metformin were DPP-4is and SUs (~63%). Only 6.5% received GLP-1RAs, and SGLT-2is were not available in Pakistan during the study period. Overall, it took a mean of 4.4 years and 5 years to initiate GLP-1RAs and SGLT-2is, respectively. 
Compared to other subgroups, more patients in the GLP-1RA subgroup received ≥3 types of ADA (58%) and ≥1 CVD medication (64%) and had a higher body mass index (37 kg/m2). Conclusions: Utilization of GLP-1RAs and SGLT-2is was low; they took longer to initiate and were not started before multiple other ADAs had been tried. This may be due to the lack of evidence for CV benefits of these agents during the study period. The planned phase 2 of the CONVERGE study can provide more insights into utilization of, and barriers to prescribing, GLP-1RAs and SGLT-2is post-2018 in Pakistan.
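The descriptive subgroup analysis above amounts to grouping records by the latest prescribed ADA class and summarizing each group. A minimal sketch with invented records (the field names and values are illustrative, not CONVERGE data):

```python
# Sketch of a descriptive subgroup summary over toy medical-record rows:
# latest add-on ADA class and baseline HbA1c (%). Values are invented.
from collections import defaultdict
from statistics import mean

records = [
    {"ada_class": "DPP-4i",  "hba1c": 8.1},
    {"ada_class": "DPP-4i",  "hba1c": 8.5},
    {"ada_class": "GLP-1RA", "hba1c": 9.0},
    {"ada_class": "SU",      "hba1c": 8.2},
]

# Group values by subgroup, then summarize descriptively (mean per class).
by_class = defaultdict(list)
for row in records:
    by_class[row["ada_class"]].append(row["hba1c"])

summary = {cls: round(mean(vals), 2) for cls, vals in by_class.items()}
```

The same pattern extends to any baseline variable (age, T2D duration, number of CVD medications) by swapping the summarized field.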

Keywords: type 2 diabetes, GLP-1RA, treatment intensification, cardiovascular disease

Procedia PDF Downloads 58
1470 Evaluation of Traffic Noise Level: A Case Study in a Residential Area of Ishbiliyah, Kuwait

Authors: Jamal Almatawah, Hamad Matar, Abdulsalam Altemeemi

Abstract:

The World Health Organization (WHO) has recognized environmental noise as a harmful pollutant that causes adverse psychosocial and physiological effects on human health. The motor vehicle is considered one of the main sources of noise pollution. It is a universal phenomenon that has grown to the point of becoming a major concern for both the public and policymakers. The aim of this paper, therefore, is to investigate traffic noise levels and the factors contributing to them, such as traffic volume, heavy vehicles, speed and meteorological conditions, taking Ishbiliyah as a sample residential area in Kuwait. Three types of roads were selected in Ishbiliyah: an expressway, a major arterial and a collector street. Other noise sources that interfere with traffic noise were also considered in this study. Traffic noise levels were measured and analyzed using the Bruel & Kjaer outdoor sound level meter 2250-L (2250 Light). The Count-Cam2 video camera was used to collect peak and off-peak traffic counts. An Ambient Weather WM-5 handheld weather station was used for meteorological factors such as temperature, humidity and wind speed, and spot speeds were obtained using a Decatur Genesis model GHD-KPH radar. All measurements were taken simultaneously. The results showed that the traffic noise level exceeds the allowable limit on all types of roads. The average equivalent noise level (LAeq) for the expressway, major arterial and collector street was 74.3 dB(A), 70.47 dB(A) and 60.84 dB(A), respectively. In addition, positive correlations were found between traffic noise and traffic volume and between traffic noise and the 85th-percentile speed. However, no significant relation with meteorological factors was found. Abnormal vehicle noise due to poor maintenance or user-enhanced exhausts was found to be one of the factors most affecting the overall traffic noise readings.
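The equivalent continuous level LAeq reported above is an energy average, not an arithmetic one: each sampled level is converted back to relative sound energy, averaged, and re-expressed in decibels. A minimal sketch (the sample values are illustrative):

```python
# Sketch of the equivalent continuous sound level (LAeq) for
# equal-duration samples of A-weighted levels in dB(A).
from math import log10

def laeq(levels_db):
    """Convert each level to relative energy, average, convert back to dB."""
    energies = [10 ** (level / 10) for level in levels_db]
    return 10 * log10(sum(energies) / len(energies))

# Energy averaging weights loud intervals more than the arithmetic mean does:
# equal time at 70 and 76 dB(A) gives roughly 73.96 dB(A), not 73 dB(A).
mixed = laeq([70, 76])
```

This is why a few loud events (such as the abnormal exhaust noise noted above) can dominate an LAeq reading even when most of the interval is quieter.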

Keywords: traffic noise, residential area, pollution, vehicle noise

Procedia PDF Downloads 58
1469 Polycode Texts in Communication of Antisocial Groups: Functional and Pragmatic Aspects

Authors: Ivan Potapov

Abstract:

Background: The aim of this paper is to investigate polycode texts in the communication of youth antisocial groups. Nowadays, the notion of a text has numerous interpretations. Among all the approaches to defining a text, we must take into account semiotic and cultural-semiotic ones. Rapidly developing IT, world globalization, and new ways of coding information increase the role of the cultural-semiotic approach. However, the development of computer technologies also leads to changes in the text itself. Polycode texts play an increasingly important role in the everyday communication of the younger generation. Therefore, research into the functional and pragmatic aspects of both verbal and non-verbal content is quite important. Methods and Material: For this survey, we applied a combination of four methods of text investigation: intention analysis, content analysis, semantic analysis and syntactic analysis. Using these methods provided us with information on general text properties, the content of transmitted messages, and each communicant’s intentions. Besides, during our research we examined the social background; therefore, we could distinguish intertextual connections between certain types of polycode texts. As sources of research material, we used 20 public channels in the popular messenger Telegram and data extracted from smartphones belonging to arrested members of antisocial groups. Findings: This investigation allows us to assert that polycode texts can be characterized as highly intertextual language units. Moreover, we could outline a classification of these texts based on communicants’ intentions. The most common types of antisocial polycode texts are calls to illegal actions and agitation. What is more, each type has its own semantic core, depending on the sphere of communication. However, the syntactic structure is universal for most polycode texts. 
Conclusion: Polycode texts play an important role in online communication. The results of this investigation demonstrate that in some social groups the use of these texts has a destructive influence on the younger generation and clearly needs further research.

Keywords: text, polycode text, internet linguistics, text analysis, context, semiotics, sociolinguistics

Procedia PDF Downloads 130
1468 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory for Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. 
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within volumetric soil moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results. While the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
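The vegetation and drought indices named above are fixed band combinations computed directly from surface reflectances. A hedged sketch using the standard NDVI formula, the MODIS EVI coefficients, and the NMDI band combination (the reflectance values below are illustrative):

```python
# Sketch of the spectral indices used in the study, from band reflectances.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def nmdi(nir, swir1, swir2):
    """Normalized Multi-band Drought Index: NIR against the SWIR difference."""
    return (nir - (swir1 - swir2)) / (nir + (swir1 - swir2))

# Illustrative reflectances for a well-vegetated pixel:
v_ndvi = ndvi(nir=0.5, red=0.1)   # dense vegetation -> NDVI well above 0
v_evi = evi(nir=0.5, red=0.1, blue=0.05)
v_nmdi = nmdi(nir=0.5, swir1=0.3, swir2=0.2)
```

Tracking these values per pixel over the 2019-2023 archive is what yields the time series the LSTM is trained on.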

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 38
1467 The Impact of Cryptocurrency Classification on Money Laundering: Analyzing the Preferences of Criminals for Stable Coins, Utility Coins, and Privacy Tokens

Authors: Mohamed Saad, Huda Ismail

Abstract:

The purpose of this research is to examine the impact of cryptocurrency classification on money laundering crimes and to analyze how the preferences of criminals differ according to the type of digital currency used. Specifically, we aim to explore the roles of stablecoins, utility coins, and privacy tokens in facilitating or hindering money laundering activities and to identify the key factors that influence criminals' choices among these cryptocurrencies. To achieve our research objectives, we used a dataset of the 32 most highly traded cryptocurrencies, as listed on CoinMarketCap for 2022. We also conducted a comprehensive review of the existing literature on cryptocurrency and money laundering, with a focus on stablecoins, utility coins, and privacy tokens, and performed several multivariate analyses. Our study reveals that the classification of cryptocurrency plays a significant role in money laundering activities, as criminals tend to prefer certain types of digital currencies over others, depending on their specific needs and goals. Specifically, we found that stablecoins are more commonly used in money laundering due to their relatively stable value and low volatility, which makes them less risky to hold and transfer. Utility coins, on the other hand, are less frequently used in money laundering due to their lack of anonymity and limited liquidity. Finally, privacy tokens, such as Monero and Zcash, are increasingly becoming a preferred choice among criminals due to their high degree of privacy and untraceability. In summary, our study highlights the importance of understanding the nuances of cryptocurrency classification in the context of money laundering and provides insights into the preferences of criminals in using digital currencies for illegal activities. Based on our findings, we recommend that policymakers address the potential misuse of cryptocurrencies for money laundering. 
By regulating stablecoins, strengthening cross-border cooperation, and fostering public-private partnerships, policymakers can help prevent and detect money laundering activities involving digital currencies.

Keywords: crime, cryptocurrency, money laundering, tokens

Procedia PDF Downloads 84
1466 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. 
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
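The accuracy comparison above rests on the mean absolute percentage error. A brief sketch of how MAPE is computed and how models are ranked by it (the sample durations are invented; the percentage figures echo those reported in the abstract):

```python
# Sketch of MAPE-based model comparison for predicted consultation durations.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return 100 * sum(errors) / len(errors)

# Toy consultation durations (minutes): actual vs. one model's predictions.
actual = [20, 40, 30]
predicted = [25, 36, 30]
toy_mape = mape(actual, predicted)  # (25% + 10% + 0%) / 3

# Ranking the approaches by the MAPE values reported in the abstract:
reported = {"experience-based": 25.8, "deep learning": 12.24,
            "gradient boosting": 13.26, "random forest": 14.71}
best = min(reported, key=reported.get)
```

Because each error is scaled by the actual duration, MAPE is comparable across short and long visits, which suits the high variation in consultation length noted above.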

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 119
1465 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition over the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans of the European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the development pathway of the future Italian energy system, in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the required asset alternatives. To accomplish this objective, the Energy System Optimization Model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy compares two different scenarios with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. 
These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission's open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
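At its core, an energy system optimization model of this kind chooses the least-cost technology mix that satisfies demand under an emissions constraint. A deliberately tiny two-technology analogue (all numbers, costs and emission factors are invented for illustration and are not TEMOA-Italy data):

```python
# Toy analogue of an energy system optimization: meet hydrogen demand at
# least cost under a CO2 cap, choosing between an emitting route (SMR) and
# a zero-carbon route (electrolysis). Electrolysis is assumed costlier, so
# the optimum keeps its share as low as the cap allows: a one-constraint
# linear program solved in closed form.

def min_cost_hydrogen(demand, co2_cap, cost_smr, emis_smr, cost_elec):
    """Return (electrolysis share, total cost) of the least-cost mix."""
    # Smallest electrolysis share x with demand * (1 - x) * emis_smr <= co2_cap:
    share_elec = max(0.0, 1.0 - co2_cap / (demand * emis_smr))
    total_cost = demand * (share_elec * cost_elec +
                           (1.0 - share_elec) * cost_smr)
    return share_elec, total_cost

# 100 units of H2 demand, SMR emitting 9 tCO2/unit, and a cap of 450 tCO2
# force at least half the hydrogen onto the clean route:
share, cost = min_cost_hydrogen(demand=100, co2_cap=450,
                                cost_smr=1.5, emis_smr=9.0, cost_elec=4.0)
```

A full ESOM such as TEMOA generalizes this to thousands of technologies, time periods and constraints, but the mechanism, minimizing system cost subject to emission targets, is the same.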

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 69