Search results for: Community Based Disaster Risk Management (CBDRM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38522

242 The Immunological Evolutionary Relationship between Signal Transducer and Activator of Transcription Genes from Three Different Shrimp Species in Response to White Spot Syndrome Virus Infection

Authors: T. C. C. Soo, S. Bhassu

Abstract:

Unlike vertebrates, which possess both innate and adaptive immunity, crustaceans, and in particular shrimps, rely on innate immunity alone. This underscores the importance of innate immunity in shrimp resistance to pathogens. Under pathogenic immune challenge, different shrimp species exhibit varying degrees of resistance to the same pathogen; even within the same species, different batches of challenged shrimps can differ in the strength of their immune defence. Several important pathways are activated in shrimps during pathogenic infection. One of them is the JAK-STAT pathway, activated during bacterial, viral and fungal infections, in which the STAT (Signal Transducer and Activator of Transcription) gene is the core element. According to the central dogma, genomic information flows from DNA to RNA to protein. This study focused on uncovering the evolutionary patterns present within the DNA (non-coding region) and RNA (coding region). The three shrimp species involved, Macrobrachium rosenbergii, Penaeus monodon and Litopenaeus vannamei, are all of commercial significance. The shrimps were challenged with white spot syndrome virus (WSSV), a well-known penaeid shrimp virus that causes high mortality. Hepatopancreas tissue samples were collected at 0, 3, 6, 12, 24, 36 and 48 h post-infection, and DNA and RNA were extracted using conventional kits. PCR with primers designed against conserved STAT regions was used to identify STAT coding sequences from RNA-derived cDNA, and the sequences were subsequently characterized using bioinformatics approaches including Ramachandran plots, ProtParam and SWISS-MODEL.
The varying levels of STAT gene activation in the three shrimp species during WSSV infection were confirmed by qRT-PCR, using three biological replicates with three technical replicates each per sample. The DNA samples, in turn, were important for uncovering structural variations within the genomic region of the STAT gene, which greatly assist in understanding functional variation in the STAT protein. The partially-overlapping primers technique was used for sequencing the genomic region. Evolutionary inferences and event predictions were then conducted by Bayesian inference using all acquired coding and non-coding sequences, supplemented by conventional phylogenetic trees constructed with the maximum likelihood method. The results showed that adaptive evolution produced STAT gene sequence mutations between shrimp species, leading to an evolutionary divergence event; the divergent sites were then correlated with the differing expression of the STAT gene. Ultimately, this study helps characterize the variability of innate immunity among shrimp species and supports the selection of disease-resistant shrimps for breeding. A deeper understanding of STAT gene evolution, from the perspective of both purifying and adaptive selection, not only provides better immunological insight among shrimp species but also serves as a useful reference for immunological studies in humans and other model organisms.
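The qRT-PCR confirmation described above reduces replicate Ct values to a relative expression level. The abstract does not name its quantification method, but a common choice is the Livak 2^(-ΔΔCt) calculation; the sketch below assumes that method, and the Ct values in the example are hypothetical.

```python
from statistics import mean

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Livak 2^(-ddCt) relative expression.

    Each argument is a list of replicate Ct values: the target gene (e.g. STAT)
    and a reference housekeeping gene, in a challenged sample and an
    unchallenged control. Replicates are averaged before normalization.
    """
    d_ct_sample = mean(ct_target_sample) - mean(ct_ref_sample)     # normalize sample
    d_ct_control = mean(ct_target_control) - mean(ct_ref_control)  # normalize control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct triplicates: STAT amplifies 4 cycles earlier (relative to the
# reference gene) in the challenged sample than in the control -> 16-fold up.
print(fold_change([20, 20, 20], [18, 18, 18], [24, 24, 24], [18, 18, 18]))  # 16.0
```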

Keywords: gene evolution, JAK-STAT pathway, immunology, STAT gene

Procedia PDF Downloads 122
241 Microplastics in Urban Environment – Coimbra City Case Study

Authors: Inês Amorim Leitão, Loes van Shaick, António Dinis Ferreira, Violette Geissen

Abstract:

Plastic pollution is a growing concern worldwide: plastics are commercialized in large quantities and take a long time to degrade. In the environment, plastic fragments into microplastics (<5 mm), which have been found in all environmental compartments at different locations. Microplastics contribute to pollution of water, air and soil and are linked to human health problems. The progressive increase of population living in cities has aggravated the pollution problem worldwide, especially in urban environments. Urban areas are a strong source of pollution through roads, industrial production, wastewater, landfills, etc. Pollutants such as microplastics are expected to be transported diffusely from these sources through different pathways such as wind and rain, which makes them very complex to quantify, control and treat; the European Commission has designated them as pressing current issues. Experts point to green areas as natural filters for contaminants in cities, through the retention capacity of vegetation; these spaces can thus help control the load of pollutants transported. This study investigates the spatial distribution of microplastics in urban soils of different land uses, their transport through atmospheric deposition, wind erosion, runoff and streams, and their deposition on vegetation such as grass and tree leaves in the urban environment. Coimbra, a medium-sized city in central Portugal, is the case study. All soil, sediment, water and vegetation samples were collected in Coimbra and later analyzed in the Wageningen University & Research laboratory. Microplastics were extracted through density separation using a sodium phosphate solution (~1.4 g cm⁻³) followed by filtration, visualized under a stereo microscope and identified using the µ-FTIR method. Microplastic particles were found in all samples.
In soils, the highest concentrations of microplastics were found in green parks, followed by landfills and industrial sites, with the lowest concentrations in forest and pasture land uses. Atmospheric deposition and streams after rainfall events seem to be the strongest pathways for microplastics. Tree leaves can retain microplastics on their surfaces; small leaves, such as needle leaves, appear to carry more microplastics per leaf area than larger leaves. Rainfall episodes seem to reduce the concentration of microplastics on leaf surfaces, suggesting that microplastics are washed down to lower levels of the tree or to the soil. Once in soil, different types of microplastics can be transported back to the atmosphere by wind erosion. Grass appears to hold high concentrations of microplastics, and enlarging the grass cover reduces both the amount of microplastics in soil and the amount moved from the ground to the atmosphere by wind erosion. This study shows that vegetation can help control the transport and dispersion of microplastics. To control the entry and concentration of microplastics in the environment, especially in cities, it is essential to define and evaluate nature-based land-use scenarios that consider the role of green urban areas in filtering small particles.

Keywords: microplastics, cities, sources, pathways, vegetation

Procedia PDF Downloads 23
240 International Collaboration: Developing the Practice of Social Work Curriculum through Study Abroad and Participatory Research

Authors: Megan Lindsey

Abstract:

Background: Globalization presents international social work with both opportunities and challenges, and the design of this international experience aligns with the three charges of the Commission on Global Social Work Education. An international collaborative effort between an American and a Scottish university social work program was based on an established university agreement. The presentation provides an overview of an international study abroad among American and Scottish social work students. Presenters will discuss both the opportunities of international collaboration and the challenges of the project. First, we will discuss the process of a successful international collaboration, including the planning, collaboration and execution of the experience, along with its application to the international field of social work. Second, we will discuss the development and implementation of the participatory action research in which the students engaged to enhance their learning experience. A collaborative qualitative research project was undertaken with three goals. First, students gained experience of Scottish social services, including agency visits and presentations. Second, a collaboration between American and Scottish MSW students allowed the exchange of ideas and knowledge about services and social work education. Third, students collaborated on a qualitative research method to reflect on their social work education and the formation of their professional identity. Methods/Methodology: American and Scottish students engaged in participatory action research using Photovoice methods while studying together in Scotland. The collaboration between faculty researchers framed a series of research questions. Both universities obtained IRB approval and trained students in Photovoice methods. The student teams used the research question and the Photovoice method to discover images that represented the formation of their professional identity.
Two Photovoice goals grounded the study's research question. First, the methods enabled individual students to record and reflect on their professional strengths and concerns. Second, student teams promoted critical dialogue and knowledge about personal and professional issues through large- and small-group discussions of photographs. Results: The international participatory approach enabled students to contextualize their common social work education and practice experiences. Team discussions between representatives of each country resulted in an understanding of professional identity formation and of the processes of social work education that contribute to that identity. Students presented photograph narrations of their knowledge and understanding of international social work education and practice, and researchers then collaborated on finding common themes. The results showed commonalities in the quality and depth of social work education but differences in how professional identity is formed; students found great differences between Scottish and American accreditation and certification. Conclusions: The faculty researchers' collaborative themes sought to categorize the students' experiences of their professional identity. While the two social work education systems are similar, there are also vast differences. The Scottish themes noted structures within American social work not found in the United Kingdom; the American researchers noted that Scotland, like the rest of the United Kingdom, relies on programs, agencies, and the individual social worker to provide structure to identity formation. Other themes will be presented.

Keywords: higher education curriculum, international collaboration, social sciences, action research

Procedia PDF Downloads 94
239 Strengths Profiling: An Alternative Approach to Assessing Character Strengths Based on Personal Construct Psychology

Authors: Sam J. Cooley, Mary L. Quinton, Benjamin J. Parry, Mark J. G. Holland, Richard J. Whiting, Jennifer Cumming

Abstract:

Practitioners draw attention to people's character strengths to promote empowerment and well-being. This paper explores the possibility that existing approaches for assessing character strengths (e.g., the Values in Action survey; VIA-IS) could be even more autonomy-supportive and empowering when combined with strengths profiling, an idiographic tool informed by personal construct theory (PCT). A PCT approach ensures that: (1) knowledge is co-created (i.e., the practitioner is not seen as the 'expert' who leads the process); (2) individuals are not required to 'fit' a prescribed list of characteristics; and (3) individuals are free to use their own terminology and interpretations. A combined strengths profiling and VIA approach was used with a sample of homeless youth (aged 16-25), who are commonly perceived as 'hard to engage' through traditional forms of assessment. Strengths profiling was completed face-to-face in small groups. Participants (N = 116) began by listing a variety of personally meaningful characteristics. Participants gave each characteristic a score out of ten for how important it was to them (1 = not so important; 10 = very important), for their ideal competency, and for their current competency (1 = poor; 10 = excellent). A discrepancy score was calculated for each characteristic (discrepancy = (ideal score − current score) × importance), whereby a lower discrepancy score indicates greater satisfaction. Strengths profiling was used at the beginning and end of a 10-week positive youth development programme. Experiences were captured through video diary-room entries made by participants and through reflective notes taken by the facilitators. Participants were also asked to complete pre- and post-programme questionnaires measuring perceptions of well-being, self-worth, and resilience. All of the young people who attended the strengths profiling session agreed to complete a profile, and the majority became highly engaged in the process.
Strengths profiling was found to be an autonomy-supportive and empowering experience, with each participant identifying an average of 10 character strengths (M = 10.27, SD = 3.23). In total, 215 different character strengths were identified, with varying terms and definitions that differed greatly between participants, demonstrating the value of soliciting personal constructs. Using the participants' definitions, 98% of characteristics could be categorized deductively into the VIA framework. Bravery, perseverance, and hope were the character strengths that featured most, whilst temperance and courage received the highest discrepancy scores. Discrepancy scores were negatively correlated with well-being, self-worth, and resilience, and meaningful improvements were recorded following the intervention. These findings support the use of strengths profiling as a theoretically driven and novel way to engage disadvantaged youth in identifying and monitoring character strengths. When young people are given the freedom to express their own characteristics, the resulting terminologies extend beyond the language used in existing frameworks. This added freedom and control over the process of strengths identification encouraged youth to take ownership of their profiles and apply their strengths. In addition, the ability to transform characteristics post hoc into the VIA framework means that strengths profiling can be used to explore aggregated/nomothetic hypotheses whilst still benefiting from its idiographic roots.
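The discrepancy calculation described in the abstract (ideal minus current, weighted by importance, all on 1-10 scales) can be sketched in a few lines; the characteristics and scores in the example are hypothetical, not taken from the study's data.

```python
def discrepancy(importance, current, ideal):
    """Discrepancy for one characteristic: the ideal-current gap weighted by
    importance (all scored 1-10); lower values indicate greater satisfaction."""
    return (ideal - current) * importance

def profile_discrepancies(profile):
    """Score a whole strengths profile.

    profile maps characteristic name -> (importance, current, ideal).
    Returns a dict of discrepancy scores, highest first useful for review.
    """
    return {name: discrepancy(*scores) for name, scores in profile.items()}

# Hypothetical profile from one participant.
scores = profile_discrepancies({
    "perseverance": (9, 7, 9),   # (9-7)*9  = 18
    "temperance":   (8, 4, 10),  # (10-4)*8 = 48  <- largest gap
    "hope":         (10, 8, 9),  # (9-8)*10 = 10
})
print(max(scores, key=scores.get))  # temperance
```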

Keywords: idiographic, nomothetic, positive youth development, VIA-IS, assessment, homeless youth

Procedia PDF Downloads 173
238 Gendered Water Insecurity: A Structural Equation Approach for Female-Headed Households in South Africa

Authors: Saul Ngarava, Leocadia Zhou, Nomakhaya Monde

Abstract:

Water crises have the fourth most significant societal impact after weapons of mass destruction, climate change, and extreme weather conditions, ahead of natural disasters. The intricacies between women and water are central to achieving the 2030 Sustainable Development Goals (SDGs). The majority of the 1.2 billion poor people worldwide, two-thirds of whom are women and most of whom live in Sub-Saharan Africa (SSA) and South Asia, do not have access to safe and reliable sources of water. Gendered differences in water security arise from a division of labour that associates women with water. Globally, women and girls are responsible for water collection in 80% of the households that have no water on their premises. Women spend 16 million hours a day collecting water, while men spend 6 million and children 4 million hours per day, time foregone in the pursuit of other livelihood activities. Owing to their proximity and activities concerning water, women are vulnerable to water insecurity through exposure to water-borne diseases, fatigue from physically carrying water, and exposure to sexual and physical harassment, amongst others. Proximity to treated water and their wellbeing also affect their sensitivity and adaptive capacity to water insecurity, and great distances, difficult terrain and heavy lifting expose women to further vulnerabilities. However, few studies have quantified these vulnerabilities and burdens on women, and those few have mostly taken a phenomenological, qualitative approach. Vulnerability studies have also been scarce in the water security realm, with most taking linear forms that quantify only exposure, sensitivity or adaptive capacity in climate change studies. The current study argues for incorporating water insecurity vulnerability assessments, especially for women, into research agendas as well as into policy interventions, monitoring, and evaluation.
The study sought to identify the pathways through which female-headed households are water insecure in South Africa, the 30th driest country in the world, by linking the drinking-water decision and vulnerability frameworks. Secondary data collected during the 2016 General Household Survey (GHS) were utilised, with a sample of 5928 female-headed households. Principal component analysis and structural equation modelling were used to analyse the data. The results show dynamic relationships between water characteristics and water treatment. There were also associations between water access and the wealth status of the female-headed households, between water access and water treatment, and between wealth status and water treatment. The study concludes that there are dynamic relationships in water insecurity (exposure, sensitivity, and adaptive capacity) for female-headed households in South Africa. The study recommends a multi-pronged approach to tackling exposure, sensitivity, and adaptive capacity to water insecurity. This should include capacitating and empowering women for wealth generation, improving access to water treatment equipment, and prioritising the infrastructure improvements that bring piped, safe water to female-headed households.
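Wealth status in survey studies like this is commonly derived as the first principal component of household asset indicators; the abstract does not detail its PCA step, so the following is only a minimal two-variable sketch with hypothetical data, using the closed-form eigen-decomposition of a 2×2 correlation matrix rather than the authors' actual procedure.

```python
import math

def pca_2d(xs, ys):
    """First principal component of two standardized indicator variables.

    For the 2x2 correlation matrix [[1, r], [r, 1]] the leading eigenvalue is
    1 + |r| and its eigenvector is (1, +/-1)/sqrt(2), so no numerical solver
    is needed. Returns (leading eigenvalue, component scores per observation).
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    zs = [((x - mx) / sx, (y - my) / sy) for x, y in zip(xs, ys)]  # z-scores
    r = sum(a * b for a, b in zs) / n                              # correlation
    lam1 = 1 + abs(r)                                              # leading eigenvalue
    v = (1 / math.sqrt(2), math.copysign(1, r) / math.sqrt(2))     # its eigenvector
    scores = [a * v[0] + b * v[1] for a, b in zs]  # wealth-index value per household
    return lam1, scores

# Hypothetical asset counts for four households (perfectly correlated here).
lam1, index = pca_2d([1, 2, 3, 4], [2, 4, 6, 8])
```

With perfectly correlated indicators the first component captures all the variance (eigenvalue 2 of 2), and the scores order the households from poorest to wealthiest.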

Keywords: gender, principal component analysis, structural equation modelling, vulnerability, water insecurity

Procedia PDF Downloads 95
237 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions, and understanding the key factors that influence patent impact can assist researchers in better understanding the evolution of AI technology and innovation trends. Identifying highly impactful patents early and supporting them therefore holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite extensive research on AI patents, accurately predicting their early impact remains a challenge: traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper used the artificial intelligence patent database of the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely multiple linear regression, random forest regression, XGBoost regression, LightGBM regression, support vector machine regression, and k-nearest neighbors regression, with early patent indicators as features, the paper comprehensively predicted patent impact from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest effect on the predicted technical impact of patents, and that effect is positive. The number of owners, the number of backward citations, and the number of independent claims are also crucial and influence technical impact positively. In predicting the social impact of patents, the number of applicants is the most critical input variable but has a negative effect, while the number of independent claims, the number of owners, and the number of backward citations are also important predictors with positive effects. For economic impact, the number of independent claims is the most important factor and has a positive effect; the number of owners, the number of sibling countries or regions, and the size of the extended patent family also have positive influences. The study relies primarily on United States Patent and Trademark Office data for artificial intelligence patents; future research could consider more comprehensive, globally sourced patent data. While the study takes various factors into account, other important features may remain unconsidered. Factors such as patent implementation and market application could be examined in future work, as they may also shape a patent's influence.
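The SHAP attributions used above are Shapley values from cooperative game theory: each feature's contribution is its marginal effect on the prediction, averaged over all coalitions of the other features. Production code would use the lightgbm and shap packages (TreeSHAP computes this efficiently for tree models); the brute-force sketch below only illustrates the definition on a toy model, and the model and feature values are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline, n_features):
    """Exact Shapley attribution for one prediction.

    Features absent from a coalition are replaced with their baseline value.
    Weight k!(n-k-1)!/n! is the probability that, in a random ordering of
    features, exactly the coalition S precedes feature i.
    """
    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n_features)]
        return model(z)

    n = n_features
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):                      # coalition sizes 0..n-1
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

# Toy linear "impact model" over 3 features, e.g. (novelty, owners, claims).
model = lambda z: 2 * z[0] + 3 * z[1] + 5 * z[2]
phis = shapley_values(model, [1, 1, 1], [0, 0, 0], 3)
```

For a linear model the attributions recover the coefficients, and by the efficiency property they always sum to the gap between the prediction and the baseline prediction.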

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 19
236 Quality of Life Among People with Mental Illness Attending a Psychiatric Outpatient Clinic in Ethiopia: A Structural Equation Model

Authors: Wondale Getinet Alemu, Lillian Mwanri, Clemence Due, Telake Azale, Anna Ziersch

Abstract:

Background: Mental illness is one of the most severe, chronic, and disabling public health problems and affects patients' quality of life (QoL). Improving the QoL of people with mental illness is one of the most critical steps in halting disease progression and avoiding complications. We therefore aimed to assess QoL and its determinants in patients with mental illness attending an outpatient clinic in Northwest Ethiopia in 2023. Methods: A facility-based cross-sectional study was conducted among people with mental illness attending an outpatient clinic in Ethiopia. The sampling interval was determined by dividing the total number of study participants with a follow-up appointment during the data collection period (2400) by the total sample size of 638, with the starting point selected by the lottery method. The interviewer-administered WHOQOL-BREF-26 tool was used to measure the QoL of people with mental illness. The domains and health-related quality of life (HRQoL) were identified. The direct and indirect effects of variables were calculated using structural equation modelling with SPSS-28 and Amos-28 software. A p-value of < 0.05 and a 95% CI were used to evaluate statistical significance. Results: A total of 636 (99.7%) participants responded and completed the WHOQOL-BREF questionnaire. The mean overall HRQoL score of people with mental illness in the outpatient clinic was 49.6 (SD 10). The highest QoL was found in the physical health domain (mean 50.67, SD 9.5) and the lowest in the psychological health domain (mean 48.41, SD 10). Rural residence, drug nonadherence, suicidal ideation, lack of counselling, moderate or severe subjective severity of illness, lack of family participation in patient care, and a family history of mental illness had indirect negative effects on HRQoL. Alcohol use and the psychological health domain had direct positive effects on QoL.
Furthermore, objective severity of illness, low self-esteem, and a family history of mental illness had both direct and indirect effects on QoL. Sociodemographic factors (residence, educational status, marital status), social support-related factors (self-esteem, lack of family participation in patient care), substance use factors (alcohol use, tobacco use) and clinical factors (objective and subjective severity of illness, lack of counselling, suicidal ideation, number of episodes, comorbid illness, family history of mental illness, poor drug adherence) affected QoL both directly and indirectly. Conclusions: In this study, the QoL of people with mental illness was poor, with the psychological health domain most affected. Sociodemographic, social support-related, substance use and clinical factors affect QoL directly and indirectly through the mediating variables of the physical, psychological, social relationship and environmental health domains. To improve the QoL of people with mental illness, we recommend that emphasis be placed on addressing the burden of mental illness, including the development of policy and practice drivers that address the factors identified above.

Keywords: quality of life, mental wellbeing, mental illness, mental disorder, Ethiopia

Procedia PDF Downloads 39
235 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by flow separation, recirculation and reattachment for a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, hence the reason for investigating it. Historically, flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry and hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and most evaluations of the Reynolds number effect in separated flows come from numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To enable comparison with other researchers, the step height, expansion ratio and the measurement positions upstream and downstream of the step were reproduced. Post-processing of the ADV records was performed using a customized numerical code implementing several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh, and turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis of the measured variables was carried out using the moving block bootstrap technique. Low noise levels were obtained after applying the post-processing techniques, demonstrating their effectiveness, and the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements gave reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; and d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can therefore be used with confidence in separated flows and is very useful for numerical model validation. It is, however, very important to post-process the acquired data adequately to obtain low noise levels and thus reduce uncertainty.
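The moving block bootstrap used for the uncertainty analysis resamples contiguous blocks rather than individual samples, so that the short-range correlation of a turbulent velocity record is preserved in each replicate. A minimal sketch (the block length and the record below are illustrative, not the study's values):

```python
import random
from statistics import mean, stdev

def moving_block_bootstrap(series, block_len, n_boot, stat=mean, seed=0):
    """Bootstrap a statistic of a correlated time series.

    Overlapping blocks of length block_len are drawn with replacement and
    concatenated until the original record length is reached; the statistic
    is recomputed on each replicate. Returns (mean, std) of the replicates,
    the std being the bootstrap uncertainty estimate.
    """
    rng = random.Random(seed)
    n = len(series)
    starts = list(range(n - block_len + 1))   # every admissible block start
    n_blocks = -(-n // block_len)             # ceil: enough blocks to cover n
    reps = []
    for _ in range(n_boot):
        resampled = []
        for _ in range(n_blocks):
            s = rng.choice(starts)
            resampled.extend(series[s:s + block_len])
        reps.append(stat(resampled[:n]))      # trim to original length
    return mean(reps), stdev(reps)

# Illustrative velocity record (m/s); a real record would hold thousands of samples.
u = [0.31, 0.29, 0.33, 0.30, 0.28, 0.32, 0.31, 0.30, 0.29, 0.33,
     0.30, 0.31, 0.32, 0.29, 0.30, 0.31, 0.28, 0.33, 0.30, 0.31]
u_mean, u_err = moving_block_bootstrap(u, block_len=4, n_boot=200)
```

The block length trades off: too short destroys the correlation structure, too long leaves too few distinct blocks; it is typically tied to the integral time scale of the turbulence.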

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 109
234 Development of Cost Effective Ultra High Performance Concrete by Using Locally Available Materials

Authors: Mohamed Sifan, Brabha Nagaratnam, Julian Thamboo, Keerthan Poologanathan

Abstract:

Ultra high performance concrete (UHPC) is a cementitious material known for its exceptional strength, ductility, and durability. However, its production is often costly due to the significant amount of cementitious materials required and the fine powders used to achieve the desired strength. The aim of this research is to explore the feasibility of developing cost-effective UHPC mixes using locally available materials. Specifically, the study investigates the use of coarse limestone sand along with other sand types, namely basalt sand, dolomite sand, and river sand, for developing UHPC mixes and evaluates their performance. The study utilises a particle packing model, optimising the combination of coarse limestone sand, basalt sand, dolomite sand, and river sand to achieve the desired properties of UHPC. The developed UHPC mixes are then evaluated for workability (measured through slump flow and mini-slump value), compressive strength (at 7, 28, and 90 days), splitting tensile strength, and microstructural characteristics analysed through scanning electron microscopy (SEM). The results demonstrate that cost-effective UHPC mixes can be developed from locally available materials without silica fume or fly ash. The UHPC mixes achieved compressive strengths of up to 149 MPa at 28 days with a cement content of approximately 750 kg/m³, and exhibited varying levels of workability, with slump flow values ranging from 550 to 850 mm. Additionally, the inclusion of coarse limestone sand effectively reduced the demand for superplasticizer and served as a filler material. By exploring the use of coarse limestone sand and other sand types, this study provides valuable insights into optimising the particle packing model for UHPC production.
The findings highlight the potential to reduce costs associated with UHPC production without compromising its strength and durability. The study collected data on the workability, compressive strength, splitting tensile strength, and microstructural characteristics of the developed UHPC mixes. Workability was measured using slump flow and mini slump tests, while compressive strength and splitting tensile strength were assessed at different curing periods. Microstructural characteristics were analysed through SEM and energy dispersive X-ray spectroscopy (EDS) analysis. The collected data were then analysed and interpreted to evaluate the performance and properties of the UHPC mixes. The research successfully demonstrates the feasibility of developing cost-effective UHPC mixes using locally available materials. The inclusion of coarse limestone sand, in combination with other sand types, shows promising results in achieving high compressive strengths and satisfactory workability. The findings suggest that the use of the particle packing model can optimise the combination of materials and reduce the reliance on expensive additives such as silica fume and fly ash. This research provides valuable insights for researchers and construction practitioners aiming to develop cost-effective UHPC mixes using readily available materials and an optimised particle packing approach.
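The abstract does not state which packing model was applied; as a hedged illustration, the modified Andreasen and Andersen target curve widely used in UHPC mix design can be sketched as follows (the distribution modulus q ≈ 0.23 and the size limits are assumed values for this sketch, not taken from the study):

```python
def target_passing(d, d_min, d_max, q=0.23):
    """Modified Andreasen & Andersen target curve: cumulative fraction of
    particles passing sieve size d (d, d_min, d_max in the same units)."""
    return (d**q - d_min**q) / (d_max**q - d_min**q)

# Illustrative example: assumed d_min = 0.0001 mm (finest cement particles)
# and d_max = 4 mm (coarse limestone sand), sieve sizes in mm.
sizes = [0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0]
curve = [round(target_passing(d, 0.0001, 4.0), 3) for d in sizes]
```

Mix proportions are then chosen so that the combined grading of cement, fillers, and the four sands follows this target curve as closely as possible.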

Keywords: cost-effective, limestone powder, particle packing model, ultra high performance concrete

Procedia PDF Downloads 65
233 Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study

Authors: K. Rhea Devaiah, B. S. Premalatha

Abstract:

Background: The tonsillo-lingual sulcus is the area between the tonsils and the base of the tongue. The surgical resection of lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, the type and extent of surgery, and also the flexibility of the remaining structures. Need for the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with a lesion in the tonsillo-lingual sulcus and on post-operative functions. Aim: To evaluate speech and swallowing functions after intensive speech and swallowing rehabilitation. The objectives are to evaluate speech intelligibility and swallowing functions after intensive therapy and to assess the quality of life. Method: The present study describes a report of a 47-year-old male with a diagnosis of basaloid squamous cell carcinoma of the left tonsillo-lingual sulcus (pT2N2M0), who underwent wide local excision with left radical neck dissection and PMMC flap reconstruction. Post-surgery, the patient presented with complaints of reduced speech intelligibility and difficulty in opening the mouth and swallowing. A detailed evaluation of speech and swallowing functions was carried out, including OPME, an articulation test, speech intelligibility, the different phases of swallowing, and a trismus evaluation. Self-reported questionnaires such as the SHI-E (Speech Handicap Index - Indian English), DHI (Dysphagia Handicap Index) and SESEQ-K (Self Evaluation of Swallowing Efficiency in Kannada) were also administered to learn how the patient perceived his problem. Based on the evaluation, the patient was diagnosed with pharyngeal-phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised twice weekly, with sessions of one hour.
Results: In total, the patient attended 10 intensive speech and swallowing therapy sessions. Results indicated misarticulation of speech sounds such as lingua-palatal sounds. Mouth opening was restricted to one finger width, with difficulty in chewing, masticating, and swallowing the bolus. Intervention strategies included oro-motor exercises, indirect swallowing therapy, usage of a trismus device to facilitate mouth opening, and a change in food consistency to ease swallowing. Practice sessions with articulation drills were held to improve the production of speech sounds and thereby speech intelligibility. Significant changes in articulatory production, speech intelligibility and swallowing abilities were observed. The self-rated quality-of-life measures such as the DHI, SHI-E and SESEQ-K revealed no speech handicap and near-normal swallowing ability, indicating improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy after carcinoma of the tonsillo-lingual sulcus is crucial, as the tongue plays an important role in both speech and swallowing. The role of speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving their overall quality of life. With intensive speech-language and swallowing therapy post-surgery for oral cancer, there can be a significant change in speech outcomes and swallowing functions, depending on the site and extent of the lesion, thereby improving the individual’s QOL.

Keywords: oral cancer, speech and swallowing therapy, speech intelligibility, trismus, quality of life

Procedia PDF Downloads 82
232 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients

Authors: Lise Paesen, Marielle Leijten

Abstract:

People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, so that a recognition effect is avoided in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, age of acquisition (AoA), colour). The resulting data set contained 50 images, belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, an early AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥ 6 years) and low frequency were criteria for the complex task. This task uses 6 images (2 animals, 1 human, 2 objects and 1 vehicle). The data were collected with the keystroke logging programme Inputlog.
Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images led to similar or even faster reaction times compared to the original images. All the images were therefore used in the main study. The produced texts of the description tasks were significantly longer compared to previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between the simple and complex tasks. The pauses within and before words varied, even when taking personal typing abilities (obtained by the typing task) into account.
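As a minimal sketch of the kind of pause and production-burst analysis described above (Inputlog's actual export format is far richer; the timestamps, threshold, and helper names here are illustrative assumptions only):

```python
# Hypothetical keystroke log as (timestamp_ms, key) pairs.
log = [(0, "T"), (180, "h"), (330, "e"), (2900, " "),
       (3070, "c"), (3230, "a"), (3390, "t")]

PAUSE_THRESHOLD_MS = 2000  # a common cut-off in writing-process research

def interkey_intervals(events):
    """Transition times (ms) between consecutive keystrokes."""
    return [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]

def production_bursts(events, threshold=PAUSE_THRESHOLD_MS):
    """Split the keystroke stream into bursts separated by pauses."""
    bursts, current = [], [events[0][1]]
    for (t1, _), (t2, key) in zip(events, events[1:]):
        if t2 - t1 >= threshold:
            bursts.append(current)
            current = []
        current.append(key)
    bursts.append(current)
    return bursts

intervals = interkey_intervals(log)
pauses = [dt for dt in intervals if dt >= PAUSE_THRESHOLD_MS]
bursts = production_bursts(log)
mean_burst_len = sum(len(b) for b in bursts) / len(bursts)
```

Mean burst length and pause counts of this kind are the process measures compared between groups; the covariate mentioned above corresponds to each participant's baseline interkey intervals from the typing task.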

Keywords: Alzheimer's disease, experimental design, language decline, writing process

Procedia PDF Downloads 251
231 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules

Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel

Abstract:

The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. What this research looks at is how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Van Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is how the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software ‘Houdini’.
The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as lost structures with little surviving evidence can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.
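The study builds its ruleset in Houdini; purely as an illustration of the kind of parametric rule involved, the pointed arch mentioned above can be generated as a two-centred (by default equilateral) arch from a handful of parameters. The function and its defaults are assumptions for this sketch, not the study's actual Houdini asset:

```python
import math

def pointed_arch(span, radius=None, steps=16):
    """Points of a two-centred pointed arch: each side is a circular
    arc centred on the opposite springing point. radius == span gives
    the classic equilateral arch."""
    r = span if radius is None else radius
    apex_y = math.sqrt(r ** 2 - (span / 2) ** 2)  # arcs meet at x = span/2
    a0 = math.pi                          # angle of the left springer, seen from (span, 0)
    a1 = math.atan2(apex_y, -span / 2)    # angle of the apex, seen from (span, 0)
    left = [(span + r * math.cos(a0 + (a1 - a0) * i / steps),
             r * math.sin(a0 + (a1 - a0) * i / steps))
            for i in range(steps + 1)]
    # The right-hand arc is the mirror image about the arch's centre line.
    right = [(span - x, y) for x, y in reversed(left)]
    return left + right
```

Adjusting `span` and `radius` regenerates the geometry, which is the essential property a procedural ruleset exploits when one component must fit many church designs.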

Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling

Procedia PDF Downloads 103
230 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is of interest because, in most hadron interactions, photons emerge as a background to the other signals under study. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are produced in addition to prompt photons. However, the production of these additional elementary particles makes it difficult to determine the cross-section for prompt photon production accurately. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the energies of the colliding heavy ions will reduce the number of additionally produced elementary particles. The study of prompt photon production is of particular importance for determining the gluon distribution in hadrons, since the photon carries information about the hard subprocess. In the present paper, the production of prompt photons in Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes is investigated. The matrix elements of the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes have been written down. The squares of the matrix elements of the processes have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and an expression for calculating the differential cross-section of the subprocesses has been obtained. Inserting the resulting expressions for the squared matrix elements into the expression for the differential cross-section, we see that the differential cross-section depends not only on the energy of the colliding protons but also on the masses of the quarks, etc. The differential cross-section of the subprocesses is then estimated.
It is shown that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The asymmetry coefficient for polarized colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without taking into account the polarization of the colliding protons are identical. The asymmetry coefficient of this subprocess is therefore zero, which is consistent with data in the literature. It is known that in any singly polarized process involving a photon, the squares of the matrix elements with and without the polarization of the initial particle must coincide; that is, the terms in the squared matrix element proportional to the degree of polarization vanish. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the annihilation of the quark-antiquark pair process decreases linearly from positive unity to negative unity with the increasing product of the polarization degrees of the colliding protons. Thus, it was found that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The value of the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.
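The abstract stops short of reproducing the expressions obtained; for orientation only, the textbook form of the differential cross-section for a 2 → 2 parton subprocess and the standard definition of the double-spin asymmetry coefficient (not necessarily the authors' exact expressions) are

```latex
\frac{d\hat{\sigma}}{d\hat{t}} \;=\; \frac{\overline{|M|^{2}}}{16\pi \hat{s}^{2}},
\qquad
A \;=\; \frac{d\sigma(\uparrow\uparrow) - d\sigma(\uparrow\downarrow)}
             {d\sigma(\uparrow\uparrow) + d\sigma(\uparrow\downarrow)},
```

where ŝ and t̂ are the Mandelstam variables of the subprocess, the bar denotes averaging (summing) over initial (final) spins and colours, and the arrows denote the polarization directions of the colliding protons.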

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

Procedia PDF Downloads 127
229 Challenges and Lessons of Mentoring Processes for Novice Principals: An Exploratory Case Study of Induction Programs in Chile

Authors: Carolina Cuéllar, Paz González

Abstract:

Research has shown that school leadership has a significant indirect effect on students’ achievements. In Chile, evidence has also revealed that this impact is stronger in vulnerable schools. With the aim of strengthening school leadership, public policy has taken up the challenge of enhancing the capabilities of novice principals through the implementation of induction programs, which include a mentoring component, entrusting the task of delivering these programs to universities. The importance of using mentoring or coaching models in the preparation of novice school leaders has been emphasized in the international literature. Thus, it can be affirmed that building leadership capacity through partnership is crucial for providing the cognitive and affective support required in the initial phase of the principal career, gaining role clarification and socialization in context, and stimulating reflective leadership practice, among other benefits. In Chile, mentoring is a recent phenomenon in the field of school leadership, and it is even newer in the preparation of new principals who work in public schools. This study, funded by the Chilean Ministry of Education, sought to explore the challenges and lessons arising from the design and implementation of the mentoring processes which are part of the induction programs, according to the perception of the different actors involved: ministerial agents, university coordinators, mentors and novice principals. The investigation used a qualitative design, based on a study of three cases (three induction programs). The sources of information were 46 semi-structured interviews, applied at two moments (at the beginning and end of mentoring). The content analysis technique was employed. Data focused on the uniqueness of each case and the commonalities within the cases. Five main challenges and lessons emerged in the design and implementation of mentoring within the induction programs for new principals from Chilean public schools.
They comprised the need to (i) develop a shared conceptual framework on mentoring among the institutions and actors involved, which helps align the expectations for the mentoring component within the induction programs, along with assisting in establishing a theory of action of mentoring that is relevant to the public school context; (ii) recognize, through actions and decisions at different levels, that the role of a mentor differs from the role of a principal, which challenges the idea that an effective principal will always be an effective mentor; (iii) improve mentors’ selection and preparation processes through the definition of common guiding criteria to ensure that a mentor takes responsibility for developing the critical judgment of novice principals, which implies not limiting the mentor’s actions to assisting with compliance with prescriptive practices and standards; (iv) generate common evaluative models with goals, instruments and indicators consistent with the characteristics of mentoring processes, which helps to assess expected results and impact; and (v) include the design of a mentoring structure as an outcome of the induction programs, which helps sustain mentoring within schools as a collective professional development practice. Results showcased interwoven elements that entail continuous negotiations at different levels. Taking action will contribute to policy efforts aimed at professionalizing the leadership role in public schools.

Keywords: induction programs, mentoring, novice principals, school leadership preparation

Procedia PDF Downloads 102
228 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development

Authors: Walter E. Allen, Robert D. Murray

Abstract:

Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries or transit agencies can benefit from the implementation of smart card AFC technology, with the promise of increased passenger convenience, added passenger satisfaction and improved agency efficiency. For transit agencies, it reduces revenue loss and improves passenger flow and bus stop data. For countries, further implementation into security, distribution of social services or currency transactions can provide greater benefits. However, small countries or transit agencies cannot afford the expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of the Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare products online with prepaid debit or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day pilot operation conducted, the transit agency gathered the data from the on-bus AcuFare 200 Card Reader, loaded (copied) the data to a USB thumb drive and uploaded the data to the Acumen Host Processing Center for consolidation into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders.
It was proven that open standards and an interoperable design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid implementing the Payment Card Industry (PCI) Data Security Standards (DSS), which are expensive to develop and costly to operate on a continuing basis. Due to the substantial additional complexities of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results of a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States, and this number grows tenfold when considering small countries and rural locations throughout Latin America and the rest of the world. Acumen is evaluating additional countries, sites or transit agencies that could benefit from smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The project demonstrated the ability to purchase fare value, rides and passes with credit cards on the internet at a reasonable cost and without highly complex security requirements.

Keywords: automatic fare collection, near field communication, small transit agencies, smart cards

Procedia PDF Downloads 255
227 The Importance of Value Added Services Provided by Science and Technology Parks to Boost Entrepreneurship Ecosystem in Turkey

Authors: Faruk Inaltekin, Imran Gurakan

Abstract:

This paper aims to discuss the importance of the value-added services provided by Science and Technology Parks for entrepreneurship development in Turkey. Entrepreneurship is a vital subject for all countries: it not only fosters economic development but also promotes innovation at local and international levels. To foster a high-tech entrepreneurship ecosystem, the technopark (Science and Technology Park, STP) concept was initiated with the establishment of Silicon Valley in the 1950s. The success and rise of Silicon Valley led to the spread of technopark activities, and developed economies have been setting up projects to plan and build STPs since the 1960s and 1970s. To promote the establishment of STPs in Turkey, the necessary legislation, the Technology Development Zones Law (No. 4691), was enacted by the Ministry of Science, Industry, and Technology in 2001 and revised in 2016 to provide more support. STPs’ basic aim is to provide customers with high-quality office space and various 'value-added services' such as business development, network connections, cooperation programs, investor/customer meetings and internationalization services. To this end, STPs should help startups deal with difficulties in the early stages and support mature companies’ export activities in foreign markets. STPs should support the production, commercialization and, more significantly, internationalization of technology-intensive businesses and foster the growth of companies. Among these value-added services, internationalization is currently a very popular subject worldwide. Most STPs design cluster or accelerator programs in order to support their companies in foreign market penetration. If startups are not ready for international competition, STPs should help them get ready for foreign markets with training and mentoring sessions. These training and mentoring sessions should take a goal-based approach to working with companies. Each company has different needs and goals.
Therefore, the definition of 'success' varies for each company. For this reason, it is very important to create customized value-added services to meet the needs of startups. Beyond local support, STPs should also be able to support their startups in foreign markets; organizing a well-defined international accelerator program plays an important role in this mission. Turkey is strategically placed between key markets in Europe, Russia, Central Asia and the Middle East, and its population is young and well educated, so both government agencies and the private sector endeavor to foster and encourage the entrepreneurship ecosystem with many forms of support. In sum, the task of technoparks in delivering these and similar value-added services is very important for developing the entrepreneurship ecosystem. The priority of all value-added services is to identify the commercialization and growth obstacles faced by entrepreneurs and remove them through one-to-one customized services. Also, in order to have a healthy startup ecosystem and create sustainable entrepreneurship, stakeholders (technoparks, incubators, accelerators, investors, universities, governmental organizations, etc.) should fulfill their roles and duties and collaborate with each other. STPs play an important role as a bridge between these stakeholders and entrepreneurs. STPs should continually benchmark and renew the services they offer so as to help start-ups survive, develop their business, and benefit from these stakeholders.

Keywords: accelerator, cluster, entrepreneurship, startup, technopark, value added services

Procedia PDF Downloads 120
226 Quantum Dots Incorporated in Biomembrane Models for Cancer Marker

Authors: Thiago E. Goto, Carla C. Lopes, Helena B. Nader, Anielle C. A. Silva, Noelio O. Dantas, José R. Siqueira Jr., Luciano Caseli

Abstract:

Quantum dots (QD) are semiconductor nanocrystals that can be employed in biological research as a tool for fluorescence imaging, having the potential to expand in vivo and in vitro analysis as cancerous cell biomarkers. Particularly, cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) exhibit stable luminescence that is feasible for biological applications, especially for imaging of tumor cells. Given these facts, it is interesting to know the mechanisms of action by which such QDs mark biological cells. For that, simplified models are a suitable strategy. Among these models, Langmuir films of lipids formed at the air-water interface seem to be adequate since they can mimic half a membrane. They are monomolecular films that form spontaneously when organic solutions of amphiphilic compounds are spread on a liquid-gas interface. After solvent evaporation, the monomolecular film is formed, and a variety of techniques, including tensiometric, spectroscopic and optical, can be applied. When the monolayer is formed by membrane lipids at the air-water interface, a model for half a membrane can be inferred in which the aqueous subphase serves as a model for the external or internal compartment of the cell. These films can be transferred to solid supports, forming the so-called Langmuir-Blodgett (LB) films, and a wider variety of techniques can additionally be used to characterize the film, allowing for the formation of devices and sensors. With these ideas in mind, the objective of this work was to investigate the specific interactions of CdSe MSQDs with tumorigenic and non-tumorigenic cells using Langmuir monolayers and LB films of lipids and specific cell extracts as membrane models for the diagnosis of cancerous cells.
Surface pressure-area isotherms and polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and the Langmuir monolayers, constructed either of selected lipids or of non-tumorigenic and tumorigenic cell extracts. The quantum dots expanded the monolayers and changed the PM-IRRAS spectra of the lipid monolayers. The mixed films were then compressed to high surface pressures and transferred from the floating monolayer to solid supports by using the LB technique. Images of the films were then obtained with atomic force microscopy (AFM) and confocal microscopy, which provided information about the morphology of the films. Similarities and differences between films of different compositions representing cell membranes, with or without CdSe MSQDs, were analysed. The results indicated that the interaction of the quantum dots with the bioinspired films is modulated by the lipid composition. The properties of the normal cell monolayer were not significantly altered, whereas the films modelling the tumorigenic cell monolayer presented significant alteration. The images therefore exhibited a stronger effect of CdSe MSQDs on the models representing cancerous cells. As an important implication of these findings, one may envisage new bioinspired surfaces based on molecular recognition for biomedical applications.
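As a hedged sketch of how surface pressure-area isotherms of the kind reported above are commonly analysed, the in-plane compressional modulus Cs⁻¹ = −A·(dπ/dA) is a standard derived quantity used to quantify monolayer expansion; the numerical data below are invented for illustration, not taken from the study:

```python
def compressional_modulus(areas, pressures):
    """Cs^-1 = -A * (dpi/dA) from a surface pressure-area isotherm,
    estimated with central finite differences at interior points."""
    mods = []
    for i in range(1, len(areas) - 1):
        dpi_dA = (pressures[i + 1] - pressures[i - 1]) / (areas[i + 1] - areas[i - 1])
        mods.append(-areas[i] * dpi_dA)
    return mods

# Invented example: area per molecule (A^2) decreasing on compression,
# surface pressure (mN/m) rising, as in a typical lipid isotherm.
areas = [100.0, 90.0, 80.0, 70.0, 60.0]
pressures = [1.0, 5.0, 12.0, 22.0, 35.0]
moduli = compressional_modulus(areas, pressures)
```

A drop in Cs⁻¹ upon adding quantum dots to the subphase would indicate a more expanded, less rigid monolayer, which is one way the expansion described above can be quantified.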

Keywords: biomembrane, langmuir monolayers, quantum dots, surfaces

Procedia PDF Downloads 168
225 Sustainability Communications Across Multi-Stakeholder Groups: A Critical Review of the Findings from the Hospitality and Tourism Sectors

Authors: Frederica Pettit

Abstract:

Contribution: Stakeholder involvement in CSR is essential to ensuring pro-environmental attitudes and behaviours across multi-stakeholder groups. Despite increased awareness of the benefits surrounding a collaborative approach to sustainability communications, its success is limited by difficulties in engaging in active online conversations with stakeholder groups. Whilst previous research defines the effectiveness of sustainability communications, this paper contributes to knowledge through the development of a theoretical framework that explores the processes for achieving pro-environmental attitudes and behaviours in stakeholder groups. The research also considers social media as an opportunity to communicate CSR information to all stakeholder groups. Approach: A systematic review was chosen to investigate the effectiveness of the types of sustainability communications used in the hospitality and tourism industries. The systematic review was completed using Web of Science and Scopus with the search terms “sustainab* communicat*”, “effective or effectiveness,” and “hospitality or tourism,” limiting the results to peer-reviewed research. 133 abstracts were initially read, with articles excluded for irrelevance, duplication, non-empirical design, and language. A total of 45 papers were included in the systematic review. Five propositions were created based on the results of the systematic review, helping to develop a theoretical framework of the processes needed for companies to encourage pro-environmental behaviours across multi-stakeholder groups. Results: The theoretical framework developed in the paper sets out the processes necessary for companies to achieve pro-environmental behaviours in stakeholders. The processes for achieving pro-environmental attitudes and behaviours are stakeholder-focused, identifying the need for communications to be specific to their targeted audience.
Collaborative communications that enable stakeholders to engage with CSR information and provide feedback lead to a higher awareness of shared CSR visions and to pro-environmental attitudes and behaviours. Companies should also aim to improve their relationships with stakeholders through transparency of CSR and through CSR strategies that match stakeholder values and ethics whilst prioritizing sustainability as part of their job roles. Alternatively, companies can prioritize pro-environmental behaviours through choice editing, mainstreaming sustainability as the only option. In recent years, there has been extensive research on social media as a viable source of sustainability communications, with benefits including direct interactions with stakeholders, the ability to reinforce the authenticity of CSR activities, and encouragement of pro-environmental behaviours. Despite this, there are challenges to implementation, including difficulties controlling stakeholder criticisms, negative stakeholder influences, and comments left on social media platforms. Conclusion: A lack of engagement with CSR information is a recurring reason for the absence of pro-environmental attitudes and behaviours across stakeholder groups. Traditional CSR strategies contribute to this through their inability to engage their intended audience. Hospitality and tourism companies are improving stakeholder relationships through collaborative processes such as those that reduce single-use plastic consumption. A collaborative approach to communications can lead to stakeholder satisfaction and, in turn, to changes in attitudes and behaviours. Different sources of communications are accessed by different stakeholder groups, identifying the need for targeted sustainability messaging, which creates benefits such as direct interactions with stakeholders, the ability to reinforce the authenticity of CSR activities, and greater engagement with sustainability information.

Keywords: hospitality, pro-environmental attitudes and behaviours, sustainability communication, social media

Procedia PDF Downloads 105
224 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. 
These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
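As a minimal illustration of the calibration strategy described above, the sketch below fits a single closure coefficient of a one-volume lumped energy balance to synthetic "experimental" data. All values (heat-transfer coefficient, mass, temperatures) are invented for illustration, and a simple grid search stands in for the nonlinear optimization step of the actual inverse method, which involves three coupled volumes with heat and mass exchange.

```python
import numpy as np

def simulate_ullage(h, T0=25.0, T_amb=300.0, A=1.0, m=5.0, cp=10.0,
                    dt=1.0, steps=200):
    """Euler integration of a single lumped energy balance,
    m*cp*dT/dt = h*A*(T_amb - T): ambient heat leaking into a cold volume.
    All parameter values are illustrative, not from the paper."""
    T = np.empty(steps + 1)
    T[0] = T0
    for k in range(steps):
        T[k + 1] = T[k] + dt * h * A * (T_amb - T[k]) / (m * cp)
    return T

# "Experimental" temperature history generated with a known coefficient.
h_true = 0.04
data = simulate_ullage(h_true)

# Inverse step: choose the closure coefficient minimising the mismatch
# between model prediction and data (grid search for simplicity).
candidates = np.linspace(0.01, 0.1, 91)
errors = [np.sum((simulate_ullage(h) - data) ** 2) for h in candidates]
h_fit = float(candidates[int(np.argmin(errors))])
```

In the paper's setting, the same idea is applied to the heat and mass transfer coefficients between the ullage, liquid and wall volumes, with a proper nonlinear optimizer replacing the grid search.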

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 66
223 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light, flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization (as the vorticity is strongly localized), implicit treatment of the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing computational cost against achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. This cost can be reduced either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction to the particle-particle interaction in regions of interest. In this paper different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of particle convection in certain regions, as well as dynamic re-discretization of the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency and accuracy of the proposed extensions are presented, together with their relevant applications.
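The O(Np²) cost quoted above comes from the all-pairs particle interaction. A minimal 2-D regularised Biot-Savart summation (illustrative only; the smoothing parameter eps is an assumption) makes the quadratic scaling explicit:

```python
import numpy as np

def induced_velocity(pos, gamma, eps=1e-3):
    """Naive O(Np^2) Biot-Savart sum for 2-D regularised vortex particles.
    pos: (Np, 2) particle positions, gamma: (Np,) circulations."""
    n = pos.shape[0]
    vel = np.zeros_like(pos)
    for i in range(n):                       # outer loop over target particles
        dx = pos[i, 0] - pos[:, 0]
        dy = pos[i, 1] - pos[:, 1]
        # Regularisation removes the self-interaction singularity.
        r2 = dx * dx + dy * dy + eps ** 2
        vel[i, 0] = np.sum(-gamma * dy / (2.0 * np.pi * r2))
        vel[i, 1] = np.sum(gamma * dx / (2.0 * np.pi * r2))
    return vel

# Two co-rotating vortices of equal strength induce equal and
# opposite velocities on one another.
pos = np.array([[-1.0, 0.0], [1.0, 0.0]])
vel = induced_velocity(pos, np.array([1.0, 1.0]))
```

The adaptation strategies in the paper reduce how often, and over how many particles, such a sum must be evaluated.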

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 240
222 Actinomycetes from Protected Forest Ecosystems of Assam, India: Diversity and Antagonistic Activity

Authors: Priyanka Sharma, Ranjita Das, Mohan C. Kalita, Debajit Thakur

Abstract:

Background: Actinomycetes are the richest source of novel bioactive secondary metabolites such as antibiotics, enzymes and other therapeutically useful metabolites with diverse biological activities. The present study assesses the antimicrobial potential and genetic diversity of culturable Actinomycetes isolated from protected forest ecosystems of Assam, which include Kaziranga National Park (26°30′-26°45′N, 93°08′-93°36′E), Pobitora Wildlife Sanctuary (26°12′-26°16′N, 91°58′-92°05′E) and Gibbon Wildlife Sanctuary (26°40′-26°45′N, 94°20′-94°25′E), located in the north-eastern part of India. Northeast India is part of the Indo-Burma mega-biodiversity hotspot, and most of the protected forests of this region are still unexplored for the isolation of effective antibiotic-producing Actinomycetes. Thus, there is a tremendous possibility that these virgin forests could be a potential storehouse of novel microorganisms, particularly Actinomycetes, exhibiting diverse biological properties. Methodology: Soil samples were collected from different ecological niches of the protected forest ecosystems of Assam, and Actinomycetes were isolated by the serial dilution spread plate technique using five selective isolation media. Preliminary screening of Actinomycetes for antimicrobial activity was done by the spot inoculation method and secondary screening by the disc diffusion method against several test pathogens, including multidrug-resistant Staphylococcus aureus (MRSA). The strains were further screened for the presence of antibiotic synthetic genes such as type I polyketide synthase (PKS-I), type II polyketide synthase (PKS-II) and non-ribosomal peptide synthetase (NRPS) genes. Genetic diversity of the Actinomycetes producing antimicrobial metabolites was analyzed through 16S rDNA-RFLP using the HinfI restriction endonuclease.
Results: Based on phenotypic characterization, a total of 172 morphologically distinct Actinomycetes were isolated and screened for antimicrobial activity by the spot inoculation method on agar medium. Among the strains tested, 102 (59.3%) showed activity against Gram-positive bacteria, 98 (56.97%) against Gram-negative bacteria, 92 (53.48%) against Candida albicans MTCC 227, and 130 (75.58%) showed activity against at least one of the test pathogens. Twelve Actinomycetes exhibited broad-spectrum antimicrobial activity in the secondary screening. Taxonomic identification of these twelve strains by 16S rDNA sequencing revealed Streptomyces to be the predominant genus. Detection of the PKS-I, PKS-II and NRPS genes indicated diverse bioactive products in these twelve Actinomycetes. Genetic diversity analysis by 16S rDNA-RFLP likewise indicated that Streptomyces was the dominant genus amongst the antimicrobial metabolite-producing Actinomycetes. Conclusion: These findings imply that Actinomycetes from the protected forest ecosystems of Assam, India, are a potential source of bioactive secondary metabolites. These areas are as yet poorly studied and represent a diverse and largely unscreened ecosystem for the isolation of potent Actinomycetes producing antimicrobial secondary metabolites. Detailed characterization of the bioactive Actinomycetes, as well as purification and structure elucidation of the bioactive compounds from the potent strains, is the subject of ongoing investigation. Exploiting Actinomycetes from such unexplored forest ecosystems is thus a promising route to developing bioactive products.

Keywords: Actinomycetes, antimicrobial activity, forest ecosystems, RFLP

Procedia PDF Downloads 361
221 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops

Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh

Abstract:

The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range of approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). It destroys vegetable crops and particularly prefers the Cruciferae family. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of infested plants, resulting in crop losses which range from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and has caused environmental degradation as well. Moreover, baseline data regarding the toxicity of newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new-chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology Laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India, during 2011-12. Various stages of S. litura (eggs, larvae) were collected from four different locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) of Punjab. Since resistance typically develops in the third instar of lepidopterous pests, larval bioassays were conducted on thirty third-instar larvae to estimate the response of field populations of S. litura under laboratory conditions at 25±2°C and 65±5 per cent relative humidity.
The leaf dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments depending on the insecticide class. LC50 values were estimated by probit analysis after correction for control mortality and were used to calculate resistance ratios (RR). The LC50 values worked out for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad are 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against pest populations collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm for Hoshiarpur, respectively. The LC50 values for these four locations were thus in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for all insecticides tested (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad). Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic across the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of LC50 values indicated a lack of cross-resistance among emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India.
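The LC50 estimation step can be sketched as follows. This is a simplified empirical-probit regression with invented bioassay counts, not the full maximum-likelihood probit analysis (with control-mortality correction) used in the study.

```python
import numpy as np
from statistics import NormalDist

def lc50_probit(conc_ppm, n_total, n_dead):
    """Estimate LC50 by linear regression of empirical probits on
    log10 concentration. A textbook simplification of full
    maximum-likelihood probit analysis; mortality must be strictly
    between 0 and 1 for inv_cdf to be defined."""
    mortality = np.asarray(n_dead) / np.asarray(n_total)
    x = np.log10(conc_ppm)
    # Empirical probits, offset by 5 per the classical convention.
    y = np.array([NormalDist().inv_cdf(p) + 5.0 for p in mortality])
    slope, intercept = np.polyfit(x, y, 1)
    # LC50 is the concentration giving probit 5 (50 % mortality).
    return 10 ** ((5.0 - intercept) / slope)

# Hypothetical bioassay: 30 larvae per concentration (counts invented).
lc50 = lc50_probit([0.01, 0.1, 1.0, 10.0], [30] * 4, [3, 12, 18, 27])
```

With these symmetric mortalities the fit recovers an LC50 of about 0.316 ppm (10^-0.5).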

Keywords: Spodoptera litura, insecticides, toxicity, resistance

Procedia PDF Downloads 315
220 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Augmenting dual polarimetric data to fully polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature.
Although the improvements achieved by recently investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
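As a schematic of controlling training through a multi-term cost, the toy function below combines a pixelwise L2 term with a term on total backscattered power (span), one characteristic polarimetric feature. Both the choice of terms and the weights are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def composite_loss(pred, target, w_l2=1.0, w_span=0.1):
    """Weighted two-term cost on (H, W, C) polarimetric channel stacks.
    Term 1: pixelwise L2 mismatch across channels.
    Term 2: mismatch in span (sum of squared channel amplitudes),
    a characteristic polarimetric feature. Weights are illustrative."""
    l2 = np.mean((pred - target) ** 2)
    span_pred = np.sum(pred ** 2, axis=-1)
    span_target = np.sum(target ** 2, axis=-1)
    span = np.mean((span_pred - span_target) ** 2)
    return w_l2 * l2 + w_span * span
```

In an actual training loop, a differentiable version of such a cost would steer the CNN toward reconstructions that preserve both per-channel values and aggregate scattering power.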

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 44
219 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), homogenizing beams of high-power lasers, critical components in Shack-Hartmann sensors, fiber-optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets; therefore, high demands are placed on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require considerable experimental effort and are time-consuming. Digital holography (DH) overcomes the problems described above. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths, owing to numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phases with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitudes, one can evaluate the phase of the object wave in both states. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. Knowing the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlens are evaluated and reported; the measured sag agrees with the manufacturer's specification within experimental limits.
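The final step, from phase difference to surface profile, reduces to Δh = λΔφ / (2π(n_lens − n_air)). A synthetic 1-D sketch is shown below; the wavelength, refractive index and parabolic profile are invented for illustration, and real reconstructions additionally require phase unwrapping.

```python
import numpy as np

wavelength = 632.8e-9          # He-Ne laser, metres (assumed)
n_lens, n_air = 1.457, 1.000   # assumed lens material and air indices

# Synthetic 1-D cut: phase with and without a parabolic microlens.
x = np.linspace(-50e-6, 50e-6, 101)
sag_true = 2e-6 * (1.0 - (x / 50e-6) ** 2)   # 2 um peak sag (invented)
phase_obj = 2 * np.pi * (n_lens - n_air) * sag_true / wavelength
phase_ref = np.zeros_like(phase_obj)         # no-sample reference state

# Phase difference -> optical path difference -> surface profile.
delta_phi = phase_obj - phase_ref            # assumed already unwrapped
sag = delta_phi * wavelength / (2 * np.pi * (n_lens - n_air))
```

The same relation applied pixelwise to the two reconstructed phase maps yields the full-field surface profile, from which the radius of curvature follows.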

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 472
218 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing

Authors: Leonie Bradfield, Stephen Fityus, John Simmons

Abstract:

The shearing behavior of current and planned coal mine spoil dumps up to 400m in height is studied using large-sample-high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights ( > 350m) are exceeding the scale ( ≤ 120m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based on either infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by slope performance of dumps up to 120m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is a broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720mm x 720mm x 600mm) and stress (σn up to 4.6MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. 
The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlights several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicates that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
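The concave-down (curvilinear) envelopes referred to above are commonly written in power-law form, τ = A·σₙᵇ with b < 1. The sketch below fits that form to synthetic peak-strength data (all values invented for illustration, spanning roughly the LDSM stress range) and shows the falling secant friction angle that makes such an envelope concave-down.

```python
import numpy as np

# Peak shear stress tau (kPa) at normal stress sigma_n (kPa);
# synthetic values following an exact power law (illustrative only).
sigma_n = np.array([100.0, 400.0, 1000.0, 2500.0, 4600.0])
tau = 2.5 * sigma_n ** 0.85

# A power-law envelope tau = A * sigma_n**b is linear in log-log
# space, so a straight-line fit recovers the two parameters.
b, log_a = np.polyfit(np.log(sigma_n), np.log(tau), 1)
a = float(np.exp(log_a))

# The secant friction angle falls as normal stress rises,
# which is what produces the concave-down shape.
phi_secant = np.degrees(np.arctan(tau / sigma_n))
```

A linear envelope fitted to the same data would overpredict strength at the highest stresses, which is the industry concern the paper addresses.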

Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump

Procedia PDF Downloads 143
217 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. Since maintaining a constant lying posture can lead to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating the patient's privacy. In this study, we therefore propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component during roll-over generates large vibrations. Thus, analysis of the body-motion component facilitates detection of a roll-over tendency. The large vibration associated with the roll-over motion strongly affects the Root Mean Square (RMS) value of the body-motion component time series, calculated over short 10 s segments. The RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. To validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and placed between the mattress and the bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component.
The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. Five subjects participated, yielding ten data sets. Subjects lay on a mattress in the supine position. During data measurement, subjects were asked, upon the investigator's instruction, to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments of 10 s each, and the RMS value of each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to have a large amplitude. However, clear differences between the decubitus posture and the roll-over motion could not be confirmed. Extant camera-based research suffers a disadvantage in terms of patient privacy; the proposed method, in contrast, detects patient roll-over tendencies precisely without violating it. As a future prospect, estimation of the decubitus posture before and after roll-over could be attempted. Since clear differences between decubitus and the roll-over motion could not be confirmed in this paper, future studies could utilize the respiration and pulse components.
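The segment-RMS thresholding described above can be sketched as follows; the signal, noise levels and threshold are synthetic stand-ins for the band-pass-filtered sensor output.

```python
import numpy as np

def detect_rollover(body_motion, fs=100, seg_s=10, threshold=0.5):
    """Flag each 10 s segment whose RMS exceeds a preset threshold.
    body_motion: 1-D body-motion component sampled at fs Hz."""
    seg_len = fs * seg_s
    n_seg = len(body_motion) // seg_len
    flags = []
    for k in range(n_seg):
        seg = body_motion[k * seg_len:(k + 1) * seg_len]
        rms = np.sqrt(np.mean(seg ** 2))
        flags.append(rms > threshold)
    return flags

# Synthetic 480 s record: low-level noise plus one large burst
# standing in for a roll-over (lands in segment index 25).
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(480 * 100)
signal[25_000:26_000] += 2.0 * rng.standard_normal(1000)
flags = detect_rollover(signal)
```

On this synthetic record, exactly one of the 48 segments is flagged, mirroring the paper's segment-level accuracy evaluation.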

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 94
216 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a recent and promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue in using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter, with a sensitive scintillating volume of 2×10⁻⁶ mm³, was developed. A prototype of the dose verification system has been introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system operated at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). Overall measurements were performed in IBA water tank phantoms, following international Technical Reports Series recommendations (TRS 381) for radiotherapy and TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization.
The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from the average) and day-to-day reproducibility (0.3% variation) were observed. Measurements demonstrated that the ISD response is linear with dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water show identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Field profiles as small as 0.5 × 0.5 cm² were characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, accounting for energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The scintillating detector proposed in this study shows no Cerenkov radiation and performs efficiently across several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can be validated in direct clinical investigations, such as dose verification and quality control of the Treatment Planning System (TPS).
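As an illustrative sketch (not the authors' analysis code) of how two of the reported parameters are commonly extracted, the snippet below fits a straight line to synthetic dose-response readings to obtain R², and fits a Gaussian to a synthetic small-field cross-profile to recover its width; all numerical values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-linearity check: synthetic count-rate readings
# versus delivered dose over the 4-800 cGy range quoted in the abstract.
dose = np.array([4, 50, 100, 200, 400, 800], dtype=float)  # cGy
counts = 150.0 * dose                                      # ideal linear detector response

slope, intercept = np.polyfit(dose, counts, 1)
pred = slope * dose + intercept
r_squared = 1 - np.sum((counts - pred) ** 2) / np.sum((counts - counts.mean()) ** 2)

# Gaussian model for the cross-profile of a small (0.5 x 0.5 cm²) field.
def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

x = np.linspace(-1.0, 1.0, 101)           # cm off-axis
profile = gaussian(x, 1.0, 0.0, 0.25)     # synthetic noiseless profile
popt, _ = curve_fit(gaussian, x, profile, p0=[0.9, 0.1, 0.3])
fwhm = 2.3548 * abs(popt[2])              # FWHM from the fitted sigma
```

In practice, the profile data would of course be noisy detector readings rather than an exact Gaussian, and the fitted FWHM would be compared against the reference microdiamond dosimeter.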

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 156
215 Exploring the Neural Mechanisms of Communication and Cooperation in Children and Adults

Authors: Sara Mosteller, Larissa K. Samuelson, Sobanawartiny Wijeakumar, John P. Spencer

Abstract:

This study was designed to examine how humans teach and learn semantic information and cooperate to jointly achieve sophisticated goals. Specifically, we are measuring individual differences in how these abilities develop from foundational building blocks in early childhood. The current study adapts a paradigm for novel noun learning developed by Samuelson, Smith, Perry, and Spencer (2011) to a hyperscanning paradigm (Cui, Bryant, and Reiss, 2012). The project measures coordinated brain activity between a parent and child using simultaneous functional near-infrared spectroscopy (fNIRS) in pairs of 2.5-, 3.5-, and 4.5-year-old children and their parents; we are also separately testing pairs of adult friends. Children and parents, or adult friends, are seated across from one another at a table. The parent (in the developmental study) then teaches their child the names of novel toys. An experimenter then tests the child by presenting the objects in pairs and asking the child to retrieve one object by name; children choose from both pairs of familiar objects and pairs of novel objects. To explore individual differences in cooperation with the same participants, each dyad also plays a cooperative game of Jenga, in which their joint score is the number of blocks they can remove from the tower as a team. A preliminary analysis of the noun-learning task showed that, when presented with 6 word-object mappings, children learned an average of 3 new words (50%), with the number of words learned by each child ranging from 2 to 4. Adults initially learned all of the new words but varied in their later retention of the mappings, which ranged from 50% to 100%. We are currently examining differences in cooperative behavior during the Jenga game, including the time spent discussing each move before it is made.
Ongoing analyses are examining the social dynamics that might underlie the differences between words that were successfully learned and those that were not for each dyad, as well as the developmental differences observed in the study. Additionally, the Jenga game is being used to better understand individual and developmental differences in social coordination during a cooperative task. At the behavioral level, the analysis maps periods of joint visual attention between participants during word learning and the Jenga game, using head-mounted eye trackers to capture each participant's first-person viewpoint during the session. We are also analyzing the coherence in brain activity between participants during novel word learning and Jenga play. Our first hypothesis is that joint visual attention during the session will be positively correlated both with the number of words learned and with the number of blocks moved during Jenga before the tower falls. Our second hypothesis is that successful communication of new words and success in the game will each be positively correlated with synchronized brain activity between the parent and child, or between the adult friends, in cortical regions underlying social cognition, semantic processing, and visual processing. This study probes both the neural and behavioral mechanisms of learning and cooperation in a naturalistic, interactive, and developmental context.
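One common way to quantify synchronized activity between two recorded channels is magnitude-squared coherence. The sketch below illustrates the idea on synthetic signals (it is not the study's actual pipeline, and the sampling rate, frequencies, and noise levels are assumptions):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 10.0                          # Hz, a plausible fNIRS sampling rate
t = np.arange(0, 300, 1 / fs)      # five minutes of recording

shared = np.sin(2 * np.pi * 0.1 * t)                 # activity common to both partners
parent = shared + 0.3 * rng.standard_normal(t.size)  # parent channel (synthetic)
child = shared + 0.3 * rng.standard_normal(t.size)   # child channel (synthetic)

# Magnitude-squared coherence: near 1 at frequencies where the signals couple.
f, cxy = coherence(parent, child, fs=fs, nperseg=500)
peak_freq = f[np.argmax(cxy)]      # frequency of strongest inter-brain coupling
```

In a real analysis, channel-level coherence values would be aggregated over the cortical regions of interest and related to behavioral measures such as words learned or blocks moved.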

Keywords: communication, cooperation, development, interaction, neuroscience

Procedia PDF Downloads 228
214 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; the lack of dystrophin causes progressive muscle necrosis, which leads to a progressive loss of mobility in those suffering from the disease. The MDX mouse, a mutant model which displays a frank dystrophinopathy, is widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the endpoints examined in this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a “critical period” exists in the MDX mouse between 4 and 6 weeks of age, when there is extensive muscle damage that is largely subclinical but evident from sCK measurements and histopathological staining. However, a full characterization of the MDX model remains incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to aggravate the muscle damage in the MDX mouse and thereby create a wider, more readily translatable and discernible therapeutic window for testing potential DMD therapies. The study subjected 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime consisting of twice-weekly treadmill sessions over a 12-month period. Each session lasted 30 minutes, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance, and locomotor activity were measured after the “critical period” at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months, and 12 months of age.
In addition, kinematic gait analysis was performed using a novel analysis algorithm to compare changes in gait and fine motor skills among diseased exercised MDX mice, exercised wild-type mice, and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time points. The results indicate that aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise brings a more pronounced and severe phenotype to light, and that this can be detected earlier by kinematic gait analysis. A reduction in mobility as measured in the open field is not apparent at younger ages or during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.
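The asymmetric pathology noted above is the kind of effect a kinematic readout can capture numerically. As a minimal sketch (the stride values and the particular asymmetry index are illustrative assumptions, not the study's algorithm), a left-right stride asymmetry could be computed as follows:

```python
import numpy as np

# Hypothetical stride lengths (cm) extracted from tracked paw positions;
# the values and the index formula are illustrative, not from the study.
left_strides = np.array([5.1, 5.0, 5.2, 4.9, 5.1])
right_strides = np.array([4.2, 4.0, 4.3, 4.1, 4.2])

mean_left, mean_right = left_strides.mean(), right_strides.mean()
# Normalized left-right difference: 0 for a perfectly symmetric gait.
asymmetry = abs(mean_left - mean_right) / (0.5 * (mean_left + mean_right))
```

Tracking such an index longitudinally, per animal, is what allows a progressive asymmetry to be detected before open-field mobility declines.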

Keywords: gait, kinematic analysis, muscular dystrophy, neuromuscular disease

Procedia PDF Downloads 258
213 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery

Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal

Abstract:

Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical, or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerization; however, this technique requires conductive surfaces from which polymerization is initiated. VPP, on the other hand, produces highly organized, biocompatible CP structures, can be carried out on a range of surfaces, and offers a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterizing both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing the deposit, and then spin-coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 s. VPP of both porous and non-porous PEDOT was achieved by exposure to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 h; non-porous structures were prepared similarly on the same substrate but without the sacrificial template. Surface morphology was characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM), surface composition by X-ray photoelectron spectroscopy (XPS), and electrochemical behaviour by cyclic voltammetry (CV). Drug loading was achieved by 50 CV cycles in a 0.1 M aqueous dexP solution. For drug release, each sample was immersed in 20 mL of phosphate-buffered saline (PBS) in a water bath operating at 37 °C and 100 rpm; the film was stimulated (a continuous pulse of ±1 V at 0.5 Hz for 17 min) while immersed in the PBS.
Samples were collected at 1, 2, 6, 23, 24, 26, and 27 h and analysed for dexP by high-performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which corresponded well with that of PEDOT and indicated approximately one dexP molecule per three EDOT monomer units. Reproducible electroactivity was demonstrated by several cycles of reduction and oxidation via CV. The drug release results confirmed successful loading via ion-exchange, with both stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures, with no statistically significant difference, possibly due to the thinness of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The porous structures produced were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.
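When aliquots are withdrawn from a fixed release bath at successive time points, the cumulative amount released is usually corrected for the drug removed in earlier samples. A minimal sketch of that bookkeeping for the sampling scheme described (20 mL PBS; the concentrations and the 1 mL aliquot volume are assumptions for illustration, not the study's data):

```python
# Hypothetical cumulative-release calculation for the sampling scheme described
# (20 mL PBS, aliquots withdrawn at 1, 2, 6, 23, 24, 26 and 27 h). The
# concentrations and the 1 mL aliquot volume are illustrative assumptions.
times_h = [1, 2, 6, 23, 24, 26, 27]
conc_ug_per_ml = [1.0, 1.5, 2.0, 2.4, 3.0, 3.3, 3.5]   # dexP by HPLC (synthetic)
v_bath, v_aliquot = 20.0, 1.0                           # mL

cumulative_ug = []
withdrawn = 0.0
for c in conc_ug_per_ml:
    # drug currently in the bath plus everything removed in earlier aliquots
    cumulative_ug.append(c * v_bath + withdrawn)
    withdrawn += c * v_aliquot
```

Plotting `cumulative_ug` against `times_h` for stimulated versus unstimulated films is then the standard way to visualize a stimulus-triggered burst release.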

Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT

Procedia PDF Downloads 212