Search results for: development processes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20242

982 COVID Prevention, Working Environmental Risk Prevention, and Business Continuity among SMEs in Selected Districts in Sri Lanka

Authors: Champika Amarasinghe

Abstract:

Introduction: The COVID-19 pandemic hit the Sri Lankan economy badly during the year 2021. More than 65% of the Sri Lankan workforce is engaged in small and medium scale businesses, which no doubt had to struggle for their survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the COVID-19 pandemic and the maintenance of healthy working environmental conditions for business continuity. A cross-sectional study was carried out to assess the OSH status and the adequacy of COVID-19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportional to size sampling was used to select the SMEs registered with the Small and Medium Scale Development Authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status in these industries. Results: According to the findings, more than 90% of the employees in these industries had moderate awareness of COVID-19 and of preventive strategies such as the importance of mask use, hand sanitizing practices, and distance maintenance, but only forty percent of them adhered to these practices. Furthermore, only thirty-five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that workplace organization for maintaining distance between two employees was poor due to the inadequacy of space in these entities. More than fifty-five percent of the SMEs had proper ventilation and lighting facilities.
More than eighty-five percent of these SMEs had poor electrical safety measures. Furthermore, eighty-two percent of them had not maintained fire safety measures. Eighty-five percent of them were exposed to high noise levels and chemicals, and they were neither using any personal protective equipment nor applying any other engineering controls. Floor conditions were poor, and neither occupational accident nor occupational disease records were being maintained. Conclusions: Based on the findings, awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were carried out to overcome these issues, which made a drastic change in the working environments and ended with one hundred percent implementation of the COVID-19 preventive strategies. This, in turn, improved worker participation in the businesses, reduced absenteeism, and improved business opportunities, and the SMEs continued their businesses without any interruption during the third episode of COVID-19 in Sri Lanka.
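
The probability-proportional-to-size selection mentioned in the abstract can be sketched as follows; a minimal illustration, assuming employee count as the size measure. The SME names, sizes, and sample size here are hypothetical, not data from the study.

```python
# Hypothetical sketch of systematic probability-proportional-to-size (PPS)
# sampling: larger SMEs (by assumed employee count) have a proportionally
# higher chance of selection. All names and sizes are illustrative.
import random

random.seed(42)

sme_sizes = {"SME_%02d" % i: random.randint(5, 250) for i in range(1, 51)}

def pps_sample(sizes, k):
    # Systematic PPS: lay units along a cumulative-size line, then pick k
    # equally spaced points starting from a random offset.
    names = list(sizes)
    cum, total = [], 0
    for name in names:
        total += sizes[name]
        cum.append(total)
    step = total / k
    start = random.uniform(0, step)
    chosen, idx = [], 0
    for j in range(k):
        point = start + j * step
        while cum[idx] < point:
            idx += 1
        chosen.append(names[idx])
    return chosen

sample = pps_sample(sme_sizes, 10)
print(sample)
```

Note that a unit whose size exceeds the sampling step can be selected more than once; practical designs usually handle such "certainty units" separately.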

Keywords: working environment, COVID-19, occupational diseases, occupational accidents

Procedia PDF Downloads 86
981 Emergence of Neurodiversity and Awareness of Autism Among School Teachers: A Preliminary Survey

Authors: Tanvi Rajesh Sanghavi

Abstract:

Introduction: Neurodiversity is a concept that captures the different ways in which everyone's brain functions and considers them part of normal variation. It is a strength-based approach that focuses on the individual's strengths and capabilities and believes in providing support wherever necessary. In many parts of the world, those diagnosed with autism spectrum disorder have been ostracized and ridiculed due to their sensory and communication differences. Hence, it becomes important for teachers to have knowledge about autism and to understand the needs of children with autism. Need: India is rich in cultural, linguistic and religious diversity. It is important to study neurodiversity in such a population for a better understanding of neurodiverse individuals and appropriate intervention. Aim & objectives: This study examines teachers' knowledge of the causes, traits and educational requirements of children with autism spectrum disorder (ASD). It also aims to find out whether mainstream schools actually provide training programs for teachers to manage such children, along with the necessary accommodations. Method: The current study was a cross-sectional study conducted among school teachers. A total of 30 school teachers were taken for the study. The participants were enrolled after informed consent and were directed to a Google Form consisting of objective questions. The first part of the questionnaire elicited information about school, teaching experience, qualification, etc. There were specific questions extracting details on attending/conducting sensitization and professional programs in regard to the care of autistic children. The second part of the questionnaire consisted of some basic questions on the teacher's understanding of diagnosis, traits, causes, the road to recovery, and the educational and communication needs of autistic children from the teacher's perspective. The responses were tabulated and analyzed descriptively.
Results: Most of the teachers had 5–10 years of teaching experience. The majority of the teachers used the term “special child” for autistic children. Around 54.8% (17 teachers) felt that the parents of autistic children should teach their child adaptive skills, and 41.9% of the teachers felt that they should seek medical intervention. About 50% of the teachers felt that the cause of autism is related to prenatal maternal factors, and about 40% felt that its cause is genetic. Only a small percentage of teachers felt that they were trained to manage children with autism. More than 50% of the teachers mentioned that their schools do not conduct training programs for managing these children. Discussion & Conclusion: In this study, the knowledge and perspectives of teachers on children with ASD were studied. The most widely held contemporary belief is that genetic factors play a major part in the development of ASD, although the existing evidence is muddled, with numerous opposing perspectives on the nature of this mechanism. It is worth noting that any culture's level of humanity is mirrored in how that society "treats" its vulnerable population.

Keywords: autism, neurodiversity, awareness, education

Procedia PDF Downloads 14
980 The Destruction of Memory: Ataturk Cultural Centre

Authors: Birge Yildirim Okta

Abstract:

This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architectonics as a social place of memory and its existence and demolishment as a space of politics. Focusing on the timeline from the early republican period until today, the paper uses narrative discourse analysis to research the Atatürk Cultural Center as a place of memory and a space of politics. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret and could not be implemented due to economic hardship during World War II. The project was later redesigned by architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. The project was then transferred to another architect, Hayati Tabanlıoğlu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism not only through its modern architectural style but also through its function as the first opera building of the republic, reflecting western, modern cultural heritage for professional groups, artists and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as grade 1 cultural heritage, ending a period of controversy that had seen calls for the demolition of the center on the claim that it had ended its useful lifespan. In 2008 the building was announced to be closed for repairs and restoration. Over the following years, the building was quietly demolished piece by piece, while the Taksim Mosque was built just in front of the Atatürk Cultural Center.
Belonging to the early republican period, AKM was a representation of the cultural production of a modern society, of an emerging, westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative governing party, the Justice and Development Party, and the construction of the Taksim Mosque in front of AKM's parcel are also representational. The question of governing the city through space has always been important for governments and those holding political power, since cities are chaotic environments seen as a threat by governments, carrying the tensions of the proletariat or of contradictory groups. The story of AKM as a dispositive, or regulatory apparatus, demonstrates how space itself becomes a political medium used to transform the socio-political condition. The article aims to discuss the existence and demolishment of the Atatürk Cultural Center by reading the constructed and demolished building as a place of memory and a space of politics.

Keywords: space of politics, place of memory, atatürk cultural center, taksim square

Procedia PDF Downloads 82
979 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods

Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman

Abstract:

Nowadays, many cities around the world are investing their efforts and resources in facilitating their citizens' lives and making cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current 'smartness' status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary for each institution, resulting in inconsistency in ranking and evaluation. This research aims to evaluate the impact of selecting such functional domains, indicators and calculation methods, which may cause changes in the rank. For that, six functional domains, i.e., Environment, Mobility, Economy, People, Living and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria. These were identified as a result of an extensive literature review of 13 well-known global indices and research, and of the ISO 37120 standard for the sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities. The values of each indicator for the selected cities were normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the values of the indicators for each city was investigated by comparing the results of two of the most used methods, i.e., geometric aggregation and fuzzy logic. The essence of these methods is assigning each indicator a weight reflecting its relative significance. However, the two methods resulted in different weights for the same indicator.
As a result of this study, the alteration in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking for the selected cities. However, it was observed that within certain functional areas the rank remained unchanged under both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, at least 481 different indicators were found across smart city indices, which is an immense number of indicators to be considered, especially for a smart city index. Further work should be devoted to finding mutual indicators that represent the index purpose globally and objectively.
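
The normalization and geometric-aggregation step described above can be sketched as follows; a minimal illustration, assuming min-max normalization of each indicator across cities and a weighted geometric mean per city. The city names, indicator values, and weights are hypothetical, not data from the study.

```python
# Hypothetical sketch: min-max normalize each indicator across cities, then
# combine the normalized scores per city with a weighted geometric mean.
# All numbers are illustrative assumptions, not values from the paper.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def geometric_aggregate(scores, weights):
    # Weighted geometric mean: prod(s_i ** w_i), weights normalized to sum to 1
    total_w = sum(weights)
    result = 1.0
    for s, w in zip(scores, weights):
        result *= (s + 1e-9) ** (w / total_w)  # epsilon keeps zero scores from collapsing the product
    return result

# Rows: cities; columns: indicators (e.g., air quality, transit coverage, GDP per capita)
cities = {"CityA": [70, 40, 55], "CityB": [50, 80, 60], "CityC": [90, 30, 45]}
weights = [0.4, 0.35, 0.25]  # assumed relative significance of each indicator

names = list(cities)
columns = list(zip(*cities.values()))                    # one tuple per indicator
normalized_cols = [min_max_normalize(col) for col in columns]
scores = {name: geometric_aggregate(
              [normalized_cols[j][i] for j in range(len(columns))], weights)
          for i, name in enumerate(names)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

A geometric mean penalizes a city heavily for any single near-zero indicator, which is one reason it can rank cities differently from additive or fuzzy-logic schemes.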

Keywords: functional domain, urban city index, indicator, smart city

Procedia PDF Downloads 147
978 A Case Study Demonstrating the Benefits of Low-Carb Eating in an Adult with Latent Autoimmune Diabetes Highlights the Necessity and Effectiveness of These Dietary Therapies

Authors: Jasmeet Kaur, Anup Singh, Shashikant Iyengar, Arun Kumar, Ira Sahay

Abstract:

Latent autoimmune diabetes in adults (LADA) is an irreversible autoimmune disease that affects insulin production. LADA is characterized by the production of glutamic acid decarboxylase (GAD) antibodies, which is similar to type 1 diabetes. Individuals with LADA may eventually develop overt diabetes and require insulin. In this condition, the pancreas produces little or no insulin, the hormone the body uses to allow glucose to enter cells and produce energy. While type 1 diabetes was traditionally associated with children and teenagers, its prevalence has increased in adults as well. LADA is frequently misdiagnosed as type 2 diabetes, especially in adulthood, when type 2 diabetes is more common. LADA develops in adulthood, usually after age 30. Managing LADA involves metabolic control with exogenous insulin and prolonging the life of surviving beta cells, thereby slowing the disease's progression. This case study examines the impact of approximately 3 months of a low-carbohydrate dietary intervention in a 42-year-old woman with LADA who was initially misdiagnosed as having type 2 diabetes. Her C-peptide was 0.13 and her HbA1c was 9.3% when this trial began. Low-carbohydrate interventions have been shown to improve blood sugar levels, including fasting, post-meal, and random blood sugar levels, as well as haemoglobin levels, blood pressure, energy levels, sleep quality, and satiety levels. The low-carbohydrate dietary intervention significantly reduced both hypo- and hyperglycaemic events. During the 3 months of the study, there were 2 to 3 hyperglycaemic events owing to physical stress and a single hypoglycaemic event. Low-carbohydrate dietary therapies lessen insulin dose inaccuracy, which explains why there were fewer hyperglycaemic and hypoglycaemic events. In three months, the glycated haemoglobin (HbA1c) level was reduced from 9.3% to 6.3%. These improvements occurred without the need for caloric restriction or physical activity.
Stress management was a crucial aspect of the treatment plan, as stress-induced neuroendocrine hormones can cause immunological dysregulation. Additionally, supplements that support the immune system and reduce inflammation were used as part of the treatment during the trial. Long-term studies are needed to track disease development and corroborate the claim that such dietary treatments can prolong the honeymoon phase in LADA. Various factors can contribute to additional autoimmune attacks, so measuring C-peptide regularly is crucial to determine whether insulin levels need to be adjusted.
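
To put the reported HbA1c drop (9.3% to 6.3%) in everyday glucose terms, one can apply the standard ADAG estimated-average-glucose relation, eAG (mg/dL) = 28.7 × HbA1c − 46.7; this conversion is our illustration, not part of the case report itself.

```python
# Convert the reported HbA1c values to estimated average glucose (eAG)
# using the standard ADAG linear relation. The conversion is a well-known
# clinical formula; applying it here is illustrative, not from the study.

def estimated_average_glucose(hba1c_percent):
    return 28.7 * hba1c_percent - 46.7  # mg/dL

before = estimated_average_glucose(9.3)
after = estimated_average_glucose(6.3)
print(f"{before:.0f} mg/dL -> {after:.0f} mg/dL")
```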

Keywords: autoimmune, diabetes, LADA, low-carb, nutrition

Procedia PDF Downloads 37
977 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays incorporates the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores to generate a barrier-free AAO template. ALD using SnCl₄ and H₂O as reactants then followed to grow a thin layer of SnO₂ on the template and form the SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50°C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability.
Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized by an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂–95% N₂ mixture gas was monitored by a Picotest M3510A 6½-digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor exhibits the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.
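
The reported resistivity is consistent with the Hall-effect values via the textbook relation ρ = 1/(q·n·μ); a quick check, using only the numbers quoted in the abstract:

```python
# Consistency check: resistivity from carrier concentration and Hall mobility,
# rho = 1 / (q * n * mu), using the values reported in the abstract.
q = 1.602e-19   # elementary charge, C
n = 1.1e20      # carrier concentration, cm^-3
mu = 16.0       # Hall mobility, cm^2/(V*s)

sigma = q * n * mu          # conductivity, (ohm*cm)^-1
rho = 1.0 / sigma           # resistivity, ohm*cm
print(f"{rho:.2e} ohm*cm")  # close to the reported 3.6e-3 ohm*cm
```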

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 241
976 Origins: An Interpretive History of MMA Design Studio’s Exhibition for the 2023 Venice Biennale

Authors: Jonathan A. Noble

Abstract:

‘Origins’ is an exhibition designed and installed by MMA Design Studio at the 2023 Venice Biennale. The installation formed part of the ‘Dangerous Liaisons’ group exhibition at the Arsenale building. An immersive experience was created for visitors, in which video projection and the bodies of visitors interacted with the scene. Designed by South African architect Mphethi Morojele, founder and owner of MMA, the primary inspiration for ‘Origins’ was the discovery in 2019, by Professor Karim Sadr, of a substantial Tswana settlement. Situated in the present-day Suikerbosrand Nature Reserve, some 45 km south of Johannesburg, this precolonial city, named Kweneng, has been dated back to the fifteenth century. This remarkable discovery was achieved thanks to advanced aerial LiDAR scanning technology, which was used to capture the traces of Kweneng across a terrain some 10 km long and 2 km wide. Discovered by light (LiDAR) and exhibited through light, Origins presents a simulated experience of Kweneng. The presentation of Kweneng was achieved primarily through video, with a circular projection onto the floor of an animated LiDAR data sequence, and onto the walls a filmed dance sequence choreographed to embody the architectural, spatial and symbolic significance of Kweneng. This paper documents the design process involved in the conceptualization, development and final realization of this noteworthy exhibition, elucidating key social and cultural questions pertaining to precolonial heritage, reimagined histories and postcolonial identity. Periods of change and of social awakening sometimes spark an interest in questions of origin, of cultural lineage and belonging, which is certainly the case for contemporary, post-Apartheid South Africa.
Researching this paper has required primary study of MMA Design Studio's project archive, including various proposals and other design-related documents, conceptual design sketches, architectural drawings and photographs. This material is supported by the author's first-hand interviews with Morojele and others who were involved, especially with respect to the choreography of the interpretive dance, the LiDAR visualization techniques and the video production that informed the simulated, immersive experience at the exhibition. Presenting a ‘dangerous liaison’ between architecture and dance, Origins looks into the distant past to frame contemporary questions pertaining to intangible heritage, animism and embodiment through architecture and dance, considerations which are required “to survive the future”, says Morojele.

Keywords: architecture and dance, Kweneng, MMA design studio, origins, Venice Biennale

Procedia PDF Downloads 86
975 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma

Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto

Abstract:

Introduction: Human and canine melanoma share common clinical and histologic characteristics, making the dog a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor–microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, the differential expression of genes in spontaneously occurring canine melanoma and in paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture the greater genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), and the MiSeq platform (Illumina). These kits allow the detection of the expression profile of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with the CLC Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analyses were carried out both comparing the biological groups (tumoral vs. healthy tissues) and comparing each neoplastic tissue vs. its paired healthy tissue; a fold change greater than two and a p-value less than 0.05 were set as the thresholds to select interesting genes. Results and Discussion: Using HIO, 63 down-regulated genes were detected; 13 of those were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Eighteen genes were up-regulated, 14 of which were also up-regulated in the paired comparison. Using MIO, 35 down-regulated genes were detected; only four of these were also down-regulated when comparing neoplastic samples vs. paired healthy tissues.
Twelve genes were up-regulated in both types of analysis. Considering the two kits, the greatest variation in fold change was among the up-regulated genes. Dogs displayed greater genetic homology with humans than with mice; moreover, the results showed that the two kits detect different genes. Most of these genes have specific cellular functions or belong to certain enzymatic categories; some have already been described as correlated with human melanoma, confirming the validity of the dog as a model for the study of molecular aspects of human melanoma.
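
The gene-selection criterion described above (fold change greater than two, p-value below 0.05) can be sketched as a simple filter; a minimal illustration, assuming down-regulation is expressed as a fold change below 0.5. The gene names and values are invented for the example, not from the study.

```python
# Hypothetical sketch of the differential-expression filter in the abstract:
# keep genes with fold change > 2 (up-regulated) or < 0.5 (down-regulated)
# and p-value < 0.05. All gene names and numbers are illustrative.

genes = [
    ("GENE_A", 3.1, 0.01),   # (name, fold change tumor vs normal, p-value)
    ("GENE_B", 0.4, 0.03),
    ("GENE_C", 2.5, 0.20),   # fails the p-value threshold
    ("GENE_D", 1.2, 0.001),  # fails the fold-change threshold
]

up_regulated = [g for g, fc, p in genes if fc > 2 and p < 0.05]
down_regulated = [g for g, fc, p in genes if fc < 0.5 and p < 0.05]
print(up_regulated, down_regulated)
```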

Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq

Procedia PDF Downloads 197
974 Assessment of the Growth Enhancement Support Scheme in Adamawa State, Nigeria

Authors: Oto J. Okwu, Ornan Henry, Victor A. Otene

Abstract:

The agricultural sector contributes a great deal to the sustenance of Nigeria's food security and economy, with an attendant impact on rural development. In spite of the relatively high number of farmers in the country, self-sufficiency in food production is still a challenge. Farmers are faced with myriad problems which hinder their production efficiency, one of which is their access to the agricultural inputs required for optimum production. To meet the challenges faced by farmers, the government at the federal level has come up with many agricultural policies, one of which is the Agricultural Transformation Agenda (ATA). The Growth Enhancement Support Scheme (GESS) is one of the critical components of ATA, aimed at ensuring the effective distribution of agricultural inputs delivered directly to farmers at a regulated cost. About 8 years after the launch of this policy, it is necessary to assess GESS and determine the impact it has made on rural farmers with respect to their access to farm inputs. This study was carried out to assess the Growth Enhancement Support Scheme (GESS) in Adamawa State, Nigeria. Crop farmers who registered under the GESS in Adamawa State, Nigeria, formed the population for the study. Primary data for the study were obtained through a survey using a structured questionnaire. A sample size of 167 respondents was selected using multi-stage, purposive, and random sampling techniques. The validity and reliability of the research instrument (questionnaire) were established through pilot testing and the test-retest method, respectively. The objectives of the study were to determine the difference in the level of access to agricultural inputs before and after GESS, to determine the difference in the cost of agricultural inputs before and after GESS, and to determine the challenges faced by rural farmers in accessing agricultural inputs through GESS.
Both descriptive and inferential statistics were used in analyzing the collected data. Specifically, the Mann-Whitney test, Student's t-test, and factor analysis were used to test the stated hypotheses. Research findings revealed a significant difference in the level of access to farm inputs after the introduction of GESS (Z = 14.216). There was also a significant difference in the cost of agro-inputs after the introduction of GESS (Pr(|T| > |t|) = 0.0000). The challenges faced by respondents in accessing agro-inputs through GESS were administrative and technical in nature. Based on the findings, it is recommended that the government make efforts to sustain the GESS, as it has significantly improved farmers' access to agricultural inputs and reduced the cost of agro-inputs; that the government address the administrative challenges respondents face in accessing inputs; and that extension agents assist farmers in overcoming the technical challenges they face in accessing inputs.
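
The before/after cost comparison above can be illustrated with a paired t-test computed from first principles; a minimal sketch using the standard library only. The cost figures, sample size, and resulting statistic are invented for the example and bear no relation to the study's data.

```python
# Hypothetical sketch of a paired before/after comparison like the one in the
# abstract: a paired t-test on input costs before and after GESS.
# All cost figures are illustrative assumptions, not data from the study.
import math
import statistics

before = [120, 150, 130, 160, 140, 155, 145, 135]  # assumed input costs pre-GESS
after  = [ 90, 110, 100, 115, 105, 120, 108,  98]  # assumed input costs post-GESS

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample standard deviation of the differences
t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t statistic, df = n - 1

print(f"t = {t_stat:.2f} with df = {n - 1}")
```

In practice one would compare t_stat against the t distribution with n − 1 degrees of freedom (e.g., via scipy.stats) to obtain the p-value the abstract reports.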

Keywords: agricultural policy, agro-inputs, assessment, growth enhancement support scheme, rural farmers

Procedia PDF Downloads 108
973 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment

Authors: Ujjwall Sai Sunder Uppuluri

Abstract:

Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how an agent-based model can theoretically be formulated from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the usage of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a 3-dimensional plane, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, is unable to continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group.
Each state is a closed system made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, each employs a strategy, and the state with the better strategy (reflected by the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk or fulfil some objective. This paper will attempt to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
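
The bottom-up idea above (a state's performance as the aggregate of its agents' resource/risk outcomes) can be sketched in a few lines; a toy illustration under assumptions of our own, since the paper is theoretical and specifies no concrete rules: each agent contributes a (resources, risk) pair, and a state's score is its total resources discounted by its average risk exposure.

```python
# Toy sketch of the bottom-up state model described in the abstract: a state
# is the aggregate of its micro-level agents; competition favors the state
# whose agents' combined output is higher per unit of risk. The agent
# distributions and the scoring rule are illustrative assumptions,
# not taken from the paper.
import random

random.seed(7)

def make_state(n_agents):
    # Each agent: (resources gathered, risk incurred) in one round
    return [(random.uniform(0, 1), random.uniform(0.1, 1)) for _ in range(n_agents)]

def state_score(agents):
    # Aggregate output: total resources discounted by average risk exposure
    total_resources = sum(r for r, _ in agents)
    avg_risk = sum(k for _, k in agents) / len(agents)
    return total_resources / avg_risk

state_a = make_state(50)
state_b = make_state(50)
winner = "A" if state_score(state_a) > state_score(state_b) else "B"
print(winner)
```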

Keywords: complex systems, evolutionary theory, group theory, international political economy

Procedia PDF Downloads 135
972 Effect of High Dose of Black Tea Extract on Physiological Parameters of Mother and Pups in Experimental Albino Rats

Authors: Avijit Dey, Antony Gomes, Subir Chandra Dasgupta

Abstract:

Tea (Camellia sinensis) is among the most popular beverages in the world, ranked second after water. Tea has been considered a health-promoting beverage since ancient times. Recently, immunomodulatory, anti-arthritic, antioxidant, anticancer and cardioprotective activities of tea have been established, but very few studies have examined the effect of high doses of black tea on health. The aim of the present study was to evaluate the role of low and high doses of black tea extract (BTE) on different physiological parameters of mothers and pups during the prenatal and postnatal developmental periods in experimental rodents. BTE was orally administered to LD (50 mg BTE/kg/day) and HD (100 mg BTE/kg/day) groups of rats (n = 6/group), but not to the control group, throughout the prenatal (day 0-21) and postnatal (day 21-42) periods. During the prenatal period (days 0, 7, 14, 20), body weight and urinary calcium, magnesium, urea and creatinine were measured. In the postnatal period (days 0, 10, 21), physical parameters of the pups, such as body weight, cranial length, cranial diameter, neck width, tail length and craniosacral length, were analyzed. Liver and lungs from pups and kidney, spleen, etc. from mothers were collected on day 42 for histopathological studies. Comparative urine strip analysis and RBC morphology (by SEM) were also examined for mothers of the different groups on day 42. The levels of cytokines IL-1alpha, IL-1beta, IL-6, IL-10 and TNF-alpha were analysed by enzyme-linked immunosorbent assay (ELISA) on day 0, day 20 and day 42. The body weights of LD and HD mothers were significantly (P<0.05) lower than those of control mothers on the 20th day of pregnancy, and there were also significant changes in urinary calcium, urea and creatinine. The biomorphometric analysis of pups showed significant alterations (P<0.05) in the HD group relative to the control. Some histological alterations were also observed in pups and mothers.
Comparative urine strip analysis and RBC morphology showed significant changes in the treated groups. LD and HD treated mothers showed an increase in proinflammatory cytokines such as IL-1beta and TNF-alpha and a decrease in the anti-inflammatory cytokine IL-10 on day 20 compared to PC mothers. This study clearly indicated that a high dose of BTE has detrimental effects on pregnant mothers and their pups. Further studies are in progress to elucidate the molecular mechanisms of action. This project work was sponsored by the National Tea Research Foundation vide Project Sanction No.: 17 (305)/2013/4423 dated 11th March, 2014. All experimental protocols described in the study were approved by the animal ethics committee.

Keywords: black tea extract, pregnancy, prenatal and postnatal development, inflammation

Procedia PDF Downloads 271
971 An Integrated Framework for Wind-Wave Study in Lakes

Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung

Abstract:

The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims to quantify: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., a WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need was identified for an efficient and reliable wave analysis method suitable for smaller scale marina projects. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology integrates wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
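The simplified fetch approach that this framework improves upon can be illustrated with the classical deep-water, fetch-limited JONSWAP growth relation; this sketch is purely illustrative of the analytical baseline, not part of the TT approach (function name and input values are hypothetical):

```python
import math

def fetch_limited_hs(u10: float, fetch: float, g: float = 9.81) -> float:
    """Deep-water, fetch-limited significant wave height (m) from the
    classical JONSWAP growth relation: Hm0* = 1.6e-3 * sqrt(F*),
    where F* = g*F/U10**2 is the dimensionless fetch."""
    f_star = g * fetch / u10 ** 2
    return 1.6e-3 * math.sqrt(f_star) * u10 ** 2 / g

# A 10 m/s wind blowing over 10 km of open water yields roughly half a metre:
hs = fetch_limited_hs(u10=10.0, fetch=10_000.0)
```

Such a formula reduces the lake to a single straight-line fetch and a single wave height, which is exactly what full-lake modelling of random waves over the real geometry replaces.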

Keywords: wave modelling, wind-wave, extreme value analysis, marina

Procedia PDF Downloads 81
970 Water Infrastructure Asset Management: A Comparative Analysis of Three Urban Water Utilities in South Africa

Authors: Elkington S. Mnguni

Abstract:

Water and sanitation services in South Africa are characterized by both achievements and challenges. After the end of apartheid in 1994, the newly elected government faced the challenge of eradicating backlogs in access to basic services, including water and sanitation. Capital investment in the development of new water and sanitation infrastructure to provide basic services to previously disadvantaged communities has grown, to a certain extent, at the expense of investment in the operation and maintenance of new and existing infrastructure. Challenges resulting from aging infrastructure and poor plant performance highlight the need to invest in the maintenance, rehabilitation, and replacement of existing infrastructure to optimize the return on investment. Advanced water infrastructure asset management (IAM) is key to achieving adequate levels of service, particularly with regard to reliable and high-quality drinking water supply, prevention of urban flooding, efficient use of natural resources, and prevention of pollution and associated risks. Against this backdrop, this paper presents an appraisal of the water and sanitation IAM systems of three South African utilities, the metropolitan cities in the Gauteng Province. About a quarter of the national population lives in the three rapidly urbanizing cities of Johannesburg, Ekurhuleni and Tshwane, located in a semi-arid region. A literature review has been done, and field visits to some of the utility facilities are being conducted. Semi-structured interviews will be conducted with the three utilities. 
The following critical factors are being analysed in terms of compliance with the national Water Services IAM Strategy (2011) and other applicable legislation: asset registers; capacity of assets; current and predicted demand; funding availability / budget allocations; plans for operation & maintenance, renewal & replacement, and risk management; no-drop status (non-revenue water levels); blue drop status (water quality); green drop status (effluent quality); and skills availability. Some of the key challenges identified in the literature review include funding constraints, skills shortages, and wastewater treatment plants operating beyond their design capacities. These challenges will be verified during field visits and research interviews. Gaps between literature and practice will be identified and relevant recommendations made if necessary. The objective of this study is to contribute to the resolution of the challenges brought about by the backlogs in the operation and maintenance of water and sanitation assets in the country in general, and in the three cities in particular, thus improving their sustainability.

Keywords: asset management, backlogs, levels of service, sustainability, water and sanitation infrastructure

Procedia PDF Downloads 227
969 The Use of Social Stories and Digital Technology as Interventions for Autistic Children: A State-Of-The-Art Review and Qualitative Data Analysis

Authors: S. Hussain, C. Grieco, M. Brosnan

Abstract:

Background and Aims: Autism is a complex neurobehavioural disorder characterised by impairments in the development of language and communication skills. The study involved a state-of-the-art systematic review, in addition to qualitative data analysis, to establish the evidence for social stories as an intervention strategy for autistic children. An up-to-date review of the use of digital technologies in the delivery of interventions to autistic children was also carried out, to assess the efficacy of digital technologies and of social stories in improving intervention outcomes for autistic children. Methods: Two student researchers reviewed a range of randomised control trials and observational studies. The aim of the review was to establish whether there was adequate evidence to justify recommending social stories to autistic patients. The students devised their own search strategies to be used across a range of search engines, including Ovid-Medline, Google Scholar and PubMed, and then critically appraised the generated literature. Additionally, qualitative data obtained from a comprehensive online questionnaire on social stories were thematically analysed. The thematic analysis was carried out independently by each researcher using a 'bottom-up' approach, meaning contributors read and analysed responses to questions and devised semantic themes from the responses to a given question. The researchers then placed each response into a semantic theme or sub-theme, and met to discuss merging their theme headings. The inter-rater reliability (IRR) was calculated before and after the theme headings were merged, giving IRR values for pre- and post-discussion. Lastly, the thematic analysis was assessed by a third researcher, a professor of psychology and the director of the Centre for Applied Autism Research at the University of Bath. 
Results: A review of the literature, as well as thematic analysis of the qualitative data, found supporting evidence for social story use. The thematic analysis uncovered some interesting themes from the questionnaire responses, relating to the reasons why social stories were used and the factors influencing their effectiveness in each case. Overall, however, the evidence for digital technology interventions was limited, and the literature could not establish a causal link between better intervention outcomes for autistic children and the use of technologies, though it did offer plausible theories for the suitability of digital technologies for autistic children. Conclusions: Overall, the review concluded that there was adequate evidence to justify advising the use of social stories with autistic children. The role of digital technologies is a fast-emerging field and appears to be a promising method of intervention for autistic children; however, it should not yet be considered an evidence-based approach. Using this research, the students developed ideas for social story interventions that aim to help autistic children.
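The abstract does not state which inter-rater reliability statistic was used; Cohen's kappa is one common choice for two raters assigning categorical theme labels, and a minimal sketch of it (with hypothetical theme names) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assign one categorical label (theme) per item."""
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labelled independently at random.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two researchers assigning four questionnaire responses to themes:
kappa = cohens_kappa(["support", "support", "barrier", "barrier"],
                     ["support", "support", "barrier", "support"])
```

A kappa of 1 indicates perfect agreement; values computed after a merging discussion would typically be higher than those computed before it.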

Keywords: autistic children, digital technologies, intervention, social stories

Procedia PDF Downloads 119
968 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born in or prior to 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 within The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to 3 controls by sex, age at diagnosis and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model which included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up for up to 30 years. From 2008 to 2015, the one-year all-cause mortality for IS patients declined, with an absolute change of -0.5%. Prescriptions of preventive treatments to cases, including statins and antihypertensives, increased considerably over time. However, prescriptions of antiplatelet drugs in routine general practice have decreased since 2010. The survival model revealed a survival benefit of antiplatelet treatment for stroke survivors, with a hazard ratio (HR) of 0.92 (0.90 - 0.94). IS diagnosis had significant interactions with gender, age at entry and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05 - 3.72) for cases compared to controls. 
Hypertension was associated with poor survival, with HR = 4.79 (4.49 - 5.09) for hypertensive cases relative to non-hypertensive controls, though the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82 - 1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescriptions of stroke preventive treatments increased, and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients. Antiplatelet drugs were found to be protective of survival. Better efforts are required to reduce the burden of stroke through health service development and primary prevention.
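The Weibull-Cox model above includes both scale and shape effects; a minimal sketch (not the authors' implementation) of the Weibull hazard shows why this matters: when shape parameters differ between groups, the hazard ratio is no longer constant over follow-up time.

```python
def weibull_hazard(t: float, scale: float, shape: float) -> float:
    """Weibull hazard function: h(t) = (k/lam) * (t/lam)**(k-1),
    with scale lam and shape k."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def hazard_ratio(t, scale_a, shape_a, scale_b, shape_b):
    """HR between two Weibull groups; constant in t only if shapes match."""
    return weibull_hazard(t, scale_a, shape_a) / weibull_hazard(t, scale_b, shape_b)

# With equal shapes (here both 1.5) the HR is the same at any follow-up time:
hr = hazard_ratio(1.0, 2.0, 1.5, 4.0, 1.5)
```

Allowing covariates to act on the shape as well as the scale lets such a model capture time-varying effects, e.g., the reported interactions of IS diagnosis with age and hypertension.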

Keywords: general practice, hazard ratio, The Health Improvement Network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model

Procedia PDF Downloads 184
967 Reflective Portfolio to Bridge the Gap in Clinical Training

Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab

Abstract:

Background: Due to the busy schedules of practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are prospective clinical doctors under training, students need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a tool for continuous monitoring and learning-gap analysis of their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty and students. The rationale for reflection was emphasized, and students were given samples of reflective writing. The portfolio was then implemented amongst undergraduate medical students of years four, five and six during their clinical clerkship. After 16 weeks of implementation, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology clinical placement rotation. 
The respondents believe that the implementation of a reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased the learning value when used as a formative assessment, helped them relate material across different courses, and improved their professional skills. However, respondents did not necessarily feel that the portfolio improved their self-esteem or helped develop their critical thinking. The portfolio takes time to complete, and respondents found the supervisors unhelpful; they had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write their reflections, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, and 28.8% thought it was too bulky. 27.5% asked for a mobile application instead. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.

Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education

Procedia PDF Downloads 84
966 Impact of an Educational Intervention on Knowledge, Attitude and Practices of Community Members on Schistosomiasis in Nelson Mandela Bay

Authors: Prince S. Campbell, Janine B. Adams, Melusi Thwala, Opeoluwa Oyedele, Paula E. Melariri

Abstract:

Schistosomiasis, often known as bilharzia, is a parasitic water-borne disease caused by trematode flatworms of the genus Schistosoma. Schistosomiasis infection and prevention have been found to be influenced by a range of socio-cultural risk factors, including human characteristics (e.g., gender, age, education, knowledge, attitude, and practices), as well as environmental and economic elements. Lack of awareness of the disease may also contribute to an individual's tendency to participate in behaviours or activities that heighten their susceptibility to infection. The current study assessed community knowledge, attitudes and practices (KAP) on schistosomiasis and implemented an educational intervention following pre-test interviews. A cross-sectional quasi-experimental research design was used in this quantitative study. Pre- and post-intervention interview-format surveys were conducted using a structured questionnaire, targeting individuals aged 18-65 years residing within 5 km of selected water bodies. The questionnaire contained 54 close-ended questions about schistosomiasis causes, transmission, and clinical symptoms, and the participants were interviewed face-to-face in their homes. Data were captured on Question Pro and analyzed using Microsoft Office Excel 365 (2019) and R (version 4.3.1) software. Overall, 380 individuals completed the pre- and post-intervention assessments; 194 (51.1%) were males and 185 (48.7%) were females. A notable 91.3% of participants did not know about schistosomiasis in the pre-intervention phase; however, the mean post-intervention test score for knowledge (9.4 ± 1.4) was higher than the pre-intervention test score (2.2 ± 2.1), indicating a good and improved knowledge of schistosomiasis among the participants. Furthermore, the paired samples t-test results demonstrated that the increase in knowledge levels was statistically significant (p<0.001). 
Also, the post-intervention improvement of both practice (p<0.001) and attitude (p<0.001) levels was statistically significant. A positive correlation (r=0.23, p<0.001) was found between knowledge and attitude in the pre-intervention stage. Knowledgeable participants had a more positive attitude towards obtaining medical assistance and disease prevention. Moreover, attitudes and practices correlated negatively (r=-0.13, p=0.013) post-intervention; hence, those with positive attitudes did not engage in risky water-related practices, which was the desired outcome. The educational intervention had a favourable impact on the KAP of the study population as the majority were able to recall the disease aetiology, symptoms, transmission pattern, and preventative measures three months post-intervention. Nevertheless, previous research has suggested that participants were unable to recall information about the disease following the intervention. Consequently, research should prioritize behavioural modification strategies that may result in a more persistent outcome in terms of the participants' knowledge, which could ultimately contribute to the development of long-term positive attitudes and practices.
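The paired-samples t-test used for the pre/post comparison can be sketched in a few lines; the scores below are hypothetical, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the within-participant post-minus-pre differences."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical knowledge scores for four participants, before and after:
t_stat = paired_t([2, 3, 1, 2], [9, 9, 9, 9])
```

A large t statistic on n - 1 degrees of freedom (379 for the study's n = 380) corresponds to the reported p < 0.001.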

Keywords: educational intervention, knowledge, attitudes and practices, schistosomiasis

Procedia PDF Downloads 17
965 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime

Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo

Abstract:

The present paper investigates the hydrodynamics and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source CFD solver 'interFoam' of the OpenFOAM CFD Toolbox. The proposed user-defined solver is based on the Volume Of Fluid (VOF) method for interface advection, and the enhancements include the implementation of a smoothing process for spurious current reduction, coupling with heat transfer and phase change, and the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, a predefined and constant frequency of successive vapour bubbles is patched upstream at a certain distance from the channel inlet. The proposed numerical simulation set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation is focused on exploring the interaction between subsequent vapour slugs with respect to their generation frequency, the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, and those of the liquid plug between two subsequent vapour slugs. The investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. The post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident. 
In most cases, interfacial waves that significantly reduce the liquid film thickness appear close to the bubble tail. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance convective heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation can enhance the present understanding by providing better insight into the complex, underpinning heat transfer mechanisms of saturated boiling within micro-channels in the slug-plug flow regime.
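For context, the liquid film thickness that such simulations resolve around each vapour slug is often benchmarked against the classical Bretherton result for a Taylor bubble at low capillary number; this is a textbook scaling, not the paper's solver:

```python
def bretherton_film_thickness(radius: float, capillary_number: float) -> float:
    """Classical Bretherton estimate of the uniform liquid film thickness
    around a Taylor bubble in a circular channel, valid for Ca << 1:
    delta = 1.34 * R * Ca**(2/3)."""
    return 1.34 * radius * capillary_number ** (2.0 / 3.0)

# A 0.5 mm radius micro-channel at Ca = 0.01 gives a film of ~30 microns:
delta = bretherton_film_thickness(5e-4, 0.01)
```

The interfacial waves reported near the bubble tail thin the film locally below such an estimate, which is one reason resolved VOF simulations are needed rather than correlations alone.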

Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM

Procedia PDF Downloads 265
964 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade

Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria

Abstract:

In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on the teaching and learning of place value through children's literature. Subsequently, the effect of this approach on conceptual understanding of the concept among first graders (6-7 years old) was studied. The current project aims to create a series of children's literature works to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than a more typical procedural understanding. Since no educational materials or children's books exist to achieve our goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within their mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse, with the aim of addressing various mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most. 
From this data, the research team will select the concepts and develop their stories in the second phase of the study. Two questions are preliminary to the implementation of our approach, namely (1) what mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires using the SimpleSondage software will be sent to all third and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data that will be collected in the fall of 2022 will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Considering that this ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step in achieving the two ultimate goals of the research project, improving the learning of elementary school students in numeracy, and contributing to the professional development of elementary school teachers.

Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics

Procedia PDF Downloads 88
963 Translating Creativity to an Educational Context: A Method to Augment the Professional Training of Newly Qualified Secondary School Teachers

Authors: Julianne Mullen-Williams

Abstract:

This paper will provide an overview of a three-year mixed methods research project that explores whether methods from the supervision of dramatherapy can augment the occupational psychology of newly qualified secondary school teachers. It will consider how creativity and the use of metaphor, as applied in the supervision of dramatherapists, can be translated to an educational context in order to explore the explicit/implicit dynamics between the teacher trainee or newly qualified teacher and the organisation, and thereby support the super-objective in training for teaching: how to 'be a teacher.' There is growing evidence that attrition rates among teachers are rising after only five years of service, owing to too many national initiatives, an unmanageable curriculum and deteriorating student discipline. The fieldwork conducted entailed facilitating a reflective space for newly qualified teachers from all subject areas, using methods from the supervision of dramatherapy, to explore the social and emotional aspects of teaching and learning, with the ultimate aim of improving the occupational psychology of teachers. Clinical supervision is a formal process of professional support and learning which permits individual practitioners in frontline service jobs (counsellors, psychologists, dramatherapists, social workers and nurses) to expand their knowledge and proficiency, take responsibility for their own practice, and improve client protection and safety of care in complex clinical situations. It is deemed integral to continued professional practice, to safeguard vulnerable people and to reduce practitioner burnout. Dramatherapy supervision incorporates all of the above but utilises creative methods as a tool to gain insight and a deeper understanding of the situation. Creativity and the use of metaphor enable the supervisee to gain an aerial view of the situation they are exploring. 
The word metaphor in Greek means to 'carry across', indicating a transfer of meaning from one frame of reference to another. The supervision support was incorporated into each group's induction training programme: the first year group attended fortnightly one-hour sessions, while the second group received two one-hour sessions every term. The existing literature on the supervision and mentoring of secondary school teacher trainees calls for changes in pre-service teacher education and in the induction period. There is a particular emphasis on the need to include reflective and experiential learning within training programmes and within the induction period, in order to help teachers manage the interpersonal dynamics and emotional impact of a high-pressure environment.

Keywords: dramatherapy supervision, newly qualified secondary school teachers, professional development, teacher education

Procedia PDF Downloads 387
962 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI in the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours required by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. 
Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). The algorithm also carries the potential for universal application, owing to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby eliminating the manual screening process entirely.
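The Hy’s law and R-value criteria underlying the selection logic can be expressed as a short screening function. A minimal sketch, using the standard threshold definitions (ALT ≥ 3× ULN with bilirubin ≥ 2× ULN and no marked ALP elevation); the function names and argument layout are illustrative, not the sponsor’s actual Argus schema:

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury R-value: (ALT/ULN) / (ALP/ULN).
    Conventionally, R >= 5 suggests hepatocellular injury, R <= 2
    cholestatic injury, and 2 < R < 5 a mixed pattern."""
    return (alt / alt_uln) / (alp / alp_uln)

def meets_hys_law(alt, alt_uln, bili, bili_uln, alp, alp_uln):
    """Classic Hy's law screen: ALT >= 3x ULN and total bilirubin >= 2x ULN,
    without marked ALP elevation (here taken as ALP < 2x ULN)."""
    return (alt >= 3 * alt_uln
            and bili >= 2 * bili_uln
            and alp < 2 * alp_uln)

# Example case: ALT 250 U/L (ULN 40), ALP 100 U/L (ULN 120),
# total bilirubin 3.1 mg/dL (ULN 1.2)
r = r_value(250, 40, 100, 120)
flag = meets_hys_law(250, 40, 3.1, 1.2, 100, 120)
print(r, flag)  # 7.5 True
```

In an automated pipeline, a check like this would run against each case’s laboratory line listing, which is also where the stated limitations (non-standard test names, missing values) would surface.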

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 150
961 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for analyzing the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI scans of MS patients. We used MRI scans, with 30 slices each, of five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and quantification of brain volume. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image, and the associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border with dedicated expansion, curvature, and advection terms). 
In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
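The volume-quantification step reduces to counting mask voxels and multiplying by the physical voxel volume. A minimal sketch, assuming an isotropic voxel size (the actual scan resolution is not stated in the abstract):

```python
import numpy as np

def brain_volume_cc(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary brain mask in cubic centimeters.

    mask: 3-D boolean (or 0/1) array, one entry per voxel.
    voxel_dims_mm: physical voxel size in millimeters (an assumed
    default; in practice this comes from the DICOM/NIfTI header).
    """
    voxel_volume_mm3 = float(np.prod(voxel_dims_mm))
    n_voxels = int(np.count_nonzero(mask))
    return n_voxels * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3

# Toy example: a 100x100x100 all-brain mask of 1 mm isotropic voxels
mask = np.ones((100, 100, 100), dtype=bool)
print(brain_volume_cc(mask))  # 1000.0
```

With anisotropic acquisitions (e.g., thicker slices), passing the true voxel dimensions is essential, since the count alone is not comparable across scans.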

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 145
960 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement

Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies

Abstract:

Some of the problems to be solved by the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete under cold weather, and the manufacture of precast concrete without a costly heating step. These applications require accelerating the hydration kinetics in order to decrease the setting time and to obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g., the creation of nucleation sites) have already been studied in the literature (e.g., by using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). However, the goal of this study was to establish a clear ranking of the efficiency of several types of additions by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease in reactivity before use, the cement was stored in waterproof, sealed bags to prevent any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) that were selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g., fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g., containing calcium chloride, hydrated Portland cement, or Master X-SEED 100, etc.) were investigated. 
The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate, and the calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g., 1 wt.% of dry extract compared to cement). At higher addition rates, the fumed silica, the precipitated calcium carbonate, and the titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal, and the dried crystallised hydrates did not show any accelerating effect. Future experiments are planned to establish the ranking of these additions, in terms of accelerating effect, using low-reactivity cements and other water-to-cement ratios.
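Isothermal-calorimetry rankings of this kind typically compare cumulative heat of hydration, obtained by integrating the measured heat-flow curve over time. A minimal sketch under that assumption; the curves below are illustrative stand-ins, not the study’s measurements:

```python
import numpy as np

def cumulative_heat(time_h, heat_flow_mw_per_g):
    """Integrate an isothermal-calorimetry heat-flow curve (mW/g vs. hours)
    into cumulative heat of hydration (J/g), using the trapezoidal rule."""
    t = np.asarray(time_h, dtype=float) * 3600.0           # hours -> seconds
    q = np.asarray(heat_flow_mw_per_g, dtype=float) / 1e3  # mW/g -> W/g
    return float(np.sum((q[1:] + q[:-1]) / 2.0 * np.diff(t)))

# Illustrative curves (not measured data): an accelerated paste reaches its
# main hydration peak earlier and higher than the reference paste
t = np.linspace(0.0, 24.0, 97)                  # 24 h at 15-min intervals
reference = 3.0 * np.exp(-((t - 10.0) ** 2) / 18.0)
accelerated = 3.5 * np.exp(-((t - 7.0) ** 2) / 18.0)
print(cumulative_heat(t, accelerated) > cumulative_heat(t, reference))  # True
```

Comparing cumulative heat at a fixed age (e.g., 24 h) gives a single scalar per addition, which is one way such a ranking can be made reproducible.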

Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement

Procedia PDF Downloads 255
959 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques

Authors: Melese Wondatir

Abstract:

Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research by acquiring essential information about floods, including their scope and the susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank these factors according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83331.14 m², 476987.15 m², 24247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the northern and western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the AUC ROC curve demonstrates that the classified vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points. 
By putting into practice strategies that account for the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
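The AHP weighting and consistency-ratio check described above can be sketched with the standard principal-eigenvector method. The 3-factor pairwise matrix below is a hypothetical illustration (the study's own ten-factor matrix is not given in the abstract):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio (CR) from an AHP pairwise
    comparison matrix, using Saaty's principal-eigenvector method."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalize to sum to 1
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    # Saaty's random-index (RI) values for matrix sizes n = 1..10
    ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49][n - 1]
    return w, (ci / ri if ri else 0.0)

# Hypothetical example: distance from river > wetness index > rainfall
A = [[1,   2,   4],
     [1/2, 1,   2],
     [1/4, 1/2, 1]]
w, cr = ahp_weights(A)
print(w.round(3), cr < 0.1)  # weights sum to 1; CR below the 0.1 threshold
```

A CR below 0.1 (here, the matrix is perfectly consistent, so CR = 0) is the conventional acceptance criterion referenced in the abstract.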

Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making

Procedia PDF Downloads 67
958 X-Ray Detector Technology Optimization In CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In back-illuminated diodes, electronic noise is improved because the reduced routing lowers the load capacitance. This translates into better image quality in low-signal applications, improving low-dose imaging across large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by patients. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. 
In addition, this paper will present an overview of detector technologies and image-chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
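The cross-contamination concern in fast-kVp switching can be illustrated with a simple single-exponential afterglow model: the slower the scintillator decay, the more residual signal leaks into the next kVp sample. The decay constants and switching interval below are illustrative assumptions, not measured scintillator properties:

```python
import math

def residual_fraction(sample_interval_us, decay_time_us):
    """Fraction of a scintillator's light output still present when the
    next sample is read, assuming single-exponential afterglow decay."""
    return math.exp(-sample_interval_us / decay_time_us)

# Assume roughly 200 us between consecutive 80 kVp and 140 kVp samples.
# A slow 100-us decay leaves substantial residual signal in the next
# sample, while a fast 3-us decay leaves effectively none.
slow = residual_fraction(200, 100)   # ~13.5% carried into the next sample
fast = residual_fraction(200, 3)     # effectively zero
print(f"{slow:.3f} {fast:.2e}")
```

This is why the abstract emphasizes primary speed and afterglow among the scintillator properties that matter most for spectral imaging.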

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 269
957 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment for different types of structures. The first step is the development of ground-motion models, which are used to forecast ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground-motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground-motion prediction. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is the better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) 
and the ground-motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
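The kind of comparison described above can be sketched with scikit-learn. The synthetic attenuation-style data below stand in for a real ground-motion catalog, and the feature set (magnitude, distance, a Vs30-like site parameter) is a simplified assumption; the point is that a tree ensemble can capture nonlinear magnitude-distance behavior that a linear model on raw features misses:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(4.0, 8.0, n)        # moment magnitude
dist = rng.uniform(5.0, 200.0, n)     # source-to-site distance, km
vs30 = rng.uniform(200.0, 800.0, n)   # site-stiffness proxy, m/s

# Synthetic ln(PGA) with log-distance attenuation and a nonlinear
# magnitude-distance interaction that a linear model on raw features misses
ln_pga = (1.2 * mag - 1.6 * np.log(dist) - 0.004 * dist
          - 0.3 * np.log(vs30 / 500.0)
          + 0.1 * mag * np.log(dist)
          + rng.normal(0.0, 0.4, n))

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(r2_score(y_te, lin.predict(X_te)), r2_score(y_te, rf.predict(X_te)))
```

With a large sample the random forest scores markedly higher out-of-sample R², mirroring the study's finding; with only a handful of records, the variance of the tree ensemble would favor the simpler regression instead.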

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 103
956 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger data sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, we used the Kaggle dataset, which consists of 15,135 images in total: 8,491 images of abnormal cells and 5,398 images of normal cells. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger data sets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. 
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be treated as auxiliary features and lead to a further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and to overcome the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance, along with their pros and cons, will be presented at the conference.

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 186
955 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students

Authors: Stephanie Jo Marchese

Abstract:

This oral presentation is a reflective piece on current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term ‘survivor’ in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual’s agency, considers each case and its needs independently of generalizations, and provides resources and support to empower victims. Feminist ideologies are fertile ground for updating and strengthening the rapport that schools of social work build with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers many survivor students have faced in institutional settings, this author included. The profession of social work stands at an important crossroads, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering the field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. 
Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values needed in the profession’s service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely former victims and clients. The ideas of this presentation proposal are supported through various qualitative interviews, as well as reviews of ‘best practices’ in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.

Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods

Procedia PDF Downloads 288
954 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. Traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, a generator driven by the combustion engine produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, this solution dominates in both heavy line and shunting locomotives. The classic diesel drive is used in the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle and is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in liquid and vapor forms). Electricity is stored in batteries (so far, lithium-ion batteries have been used) and is used to drive the traction motors and supply onboard equipment. 
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with novel electrodes, a line of research that connects industry with science. The first goal is to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR spectroscopy), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon material and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 – M.P. and grant 0911/SBAD/2102 – B.K.
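The fuel-cell-to-battery energy flow described above can be bounded with Faraday's law, since each hydrogen molecule supplies two electrons. A minimal sketch with illustrative stack parameters (the cell count, cell voltage, and current are assumptions, not values from any specific vehicle):

```python
F = 96485.0        # Faraday constant, C per mol of electrons
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def stack_power_kw(n_cells, cell_voltage, current_a):
    """Electrical power of a fuel-cell stack with cells in series (kW)."""
    return n_cells * cell_voltage * current_a / 1000.0

def h2_consumption_kg_per_h(n_cells, current_a):
    """Hydrogen mass flow from Faraday's law: each H2 molecule supplies
    2 electrons, so the molar rate per cell is I / (2F)."""
    mol_per_s = n_cells * current_a / (2.0 * F)
    return mol_per_s * M_H2 * 3600.0

# Illustrative stack: 400 cells at 0.65 V per cell, drawing 300 A
power = stack_power_kw(400, 0.65, 300)       # 78.0 kW
h2_rate = h2_consumption_kg_per_h(400, 300)  # ~4.5 kg/h
print(round(power, 1), round(h2_rate, 2))
```

Numbers like these make the sizing trade-off concrete: the roof tanks must hold enough hydrogen for the duty cycle, while the battery buffers the difference between stack output and instantaneous traction demand.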

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 187
953 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Second, the exploration extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. 
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM worsen, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
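The deconvolution idea behind the DDM can be illustrated in one dimension with van Cittert iterations, which approximately invert an invertible LES filter (here, a Gaussian). The filter width, signal, and iteration count are illustrative choices, not the study's settings:

```python
import numpy as np

def gaussian_filter_1d(u, delta, dx):
    """Apply a 1-D Gaussian LES filter of width delta via FFT
    (transfer function G(k) = exp(-k^2 delta^2 / 24))."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    g_hat = np.exp(-(k ** 2) * delta ** 2 / 24.0)
    return np.real(np.fft.ifft(np.fft.fft(u) * g_hat))

def van_cittert_deconvolve(u_filtered, delta, dx, n_iter=5):
    """Approximate inverse filtering: u*_{m+1} = u*_m + (u_bar - G u*_m)."""
    u_star = u_filtered.copy()
    for _ in range(n_iter):
        u_star = u_star + (u_filtered - gaussian_filter_1d(u_star, delta, dx))
    return u_star

# Toy 'velocity' field: a few resolved modes on a periodic domain
n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
u = np.sin(x) + 0.5 * np.sin(8 * x) + 0.2 * np.sin(16 * x)

u_bar = gaussian_filter_1d(u, delta=4 * dx, dx=dx)
u_rec = van_cittert_deconvolve(u_bar, delta=4 * dx, dx=dx)

err_filtered = np.linalg.norm(u - u_bar)
err_recovered = np.linalg.norm(u - u_rec)
print(err_recovered < err_filtered)  # True: deconvolution recovers lost scales
```

In spectral terms, m iterations restore each mode by a factor 1 - (1 - G(k))^(m+1), which is why the reconstruction works only for scales the filter attenuates but does not destroy; this is the same resolution constraint the FGR results above express.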

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 75