Search results for: pediatric primary care
605 Progressing Institutional Quality Assurance and Accreditation of Higher Education Programmes
Authors: Dominique Parrish
Abstract:
Globally, higher education institutions are responsible for the quality assurance and accreditation of their educational programmes (Courses). The primary purpose of these activities is to ensure that the educational standards of the governing higher education authority are met and that the quality of the education provided to students is assured. Despite policies and frameworks being established in many countries to improve the veracity and accountability of quality assurance and accreditation processes, there are reportedly still mistakes, gaps, and deficiencies in these processes. An analysis of Australian universities’ quality assurance and accreditation processes noted that significant improvements were needed in managing these processes and ensuring that review recommendations were implemented. It has also been suggested that the following principles are critical for higher education quality assurance and accreditation to be effective and sustainable: academic standards and performance outcomes must be defined, attainable, and monitored; those involved in providing the higher education must assume responsibility for the associated quality assurance and accreditation; potential academic risks must be identified and management solutions developed; and the expectations of the public, governments, and students should be considered and incorporated into Course enhancements. This phenomenological study, conducted in a Faculty of Science, Medicine and Health at an Australian university, sought to systematically and iteratively develop an effective quality assurance and accreditation process that integrated these evidence-based principles of success and promoted meaningful and sustainable change. Qualitative evaluative feedback was gathered over a period of eleven months (January–November 2014) from faculty staff engaged in the quality assurance and accreditation of forty-eight undergraduate and postgraduate Courses.
Reflexive analysis was used to analyse the data and inform ongoing modifications and developments to the assurance and accreditation process as well as the associated supporting resources. The study resulted in the development of a formal quality assurance and accreditation process together with a suite of targeted resources that were identified as critical for success. The research findings also provided insights into the institutional enablers that were antecedents to successful quality assurance and accreditation processes as well as to meaningful change in the educational practices of academics. While longitudinal data will be collected to further assess the value of the assurance and accreditation process for educational quality, early indicators are that there has been a change in the pedagogical perspectives and activities of academic staff and growing momentum to explore opportunities to further enhance and develop Courses. This presentation will explain the formal quality assurance and accreditation process and its component parts, which resulted from this study. The targeted resources that were developed will be described, the pertinent factors that contributed to the success of the process will be discussed, and early indicators of sustainable academic change as well as suggestions for future research will be outlined.
Keywords: academic standards, quality assurance and accreditation, phenomenological study, process, resources
Procedia PDF Downloads 377
604 Influence of Gamma-Radiation Dosimetric Characteristics on the Stability of the Persistent Organic Pollutants
Authors: Tatiana V. Melnikova, Lyudmila P. Polyakova, Alla A. Oudalova
Abstract:
As a result of environmental pollution, agricultural products and foodstuffs inevitably contain residual amounts of Persistent Organic Pollutants (POPs). Particular attention must be given to organic pollutants, including various organochlorinated pesticides (OCPs). Among the priority OCPs are DDT (and its metabolite DDE), alpha-HCH, and gamma-HCH (lindane). These substances are controlled according to the requirements of sanitary norms and rules. At the same time, it is often overlooked that the primary product may undergo technological processing (in particular, irradiation treatment), as a result of which the physicochemical forms of the initial polluting substances may be transformed. The goal of the present work was to study the radiation degradation of OCPs under various gamma-radiation dosimetric characteristics. The problems posed to achieve this goal were: to evaluate the content of the priority OCPs in food, and to study the character of OCP degradation in model solutions (with micro concentrations commensurate with their real content in agricultural and food products) depending on the dosimetric characteristics of the gamma radiation. Qualitative and quantitative analysis of OCPs in food and model solutions was carried out using a Varian 3400 gas chromatograph (Varian, Inc., USA) and a Varian Saturn 4D chromatography-mass spectrometer (Varian, Inc., USA). Solutions of DDT, DDE, and the alpha and gamma isomers of HCH (0.01, 0.1, 1 ppm) were irradiated on the "Issledovatel" (60Co) and "Luch-1" (60Co) installations at a dose of 10 kGy with the dose rate varied from 0.0083 up to 2.33 kGy/sec. It was established experimentally that residual OCP concentrations in individual samples of food products (fish, milk, cereal crops, meat, butter) are in the range of 10⁻¹-10⁻⁴ mg/kg, with values depending on territorial factors and natural migration processes. These results were used in the preparation of the OCP model solutions.
The dependence of the OCP degradation extent on the gamma-irradiation dose rate has a complex nature. According to our data, at a dose of 10 kGy the degradation extent of OCPs first increases, passes through a maximum (over the range 0.23–0.43 Gy/sec), and then decreases as the dose rate grows. The character of this dependence is preserved for the various OCPs, in polar and nonpolar solvents, and does not vary with changes in the concentration of the initial substance. The conditions for the maximal radiochemical yield of OCP degradation were also determined: gamma irradiation at a dose of 10 kGy, a dose rate in the range 0.23–0.43 Gy/sec, an initial OCP concentration of 1 ppm, and the use of 2-propanol as solvent after preliminary removal of oxygen. Since the study of the model OCP solutions established that the degradation extent of the pesticides and the qualitative composition of the OCP radiolysis products depend on the dose rate, it was decided to continue the research on radiochemical transformations of OCPs in foodstuffs at various dose rates.
Keywords: degradation extent, dosimetric characteristics, gamma-radiation, organochlorinated pesticides, persistent organic pollutants
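The degradation extent discussed above is simply the fraction of the initial pesticide concentration lost after irradiation. A minimal illustrative sketch (the concentrations below are hypothetical, not values reported by the study):

```python
# Degradation extent of a pesticide after irradiation, defined as the
# percentage of the initial concentration that was destroyed.
def degradation_extent(c_initial_ppm, c_final_ppm):
    """Return the degradation extent in percent."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

# Hypothetical example: a 1 ppm DDT solution measured at 0.35 ppm
# after a 10 kGy dose.
extent = degradation_extent(1.0, 0.35)
```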
Procedia PDF Downloads 249
603 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study
Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi
Abstract:
Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., one of the body’s own endocannabinoid-like compounds playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective against OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left knee receiving an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14, and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index.
On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF), and matrix metalloproteinases 1, 3, and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine
Procedia PDF Downloads 115
602 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis
Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr
Abstract:
Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of the treatment. Considering the vast surgical options, we tried to globally compare the available methods in terms of quality of life following surgery. The aim of the study is to compare quality of life one year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to PRISMA guidelines with a Bayesian network meta-analysis. Inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one is surgical, and assessment of quality of life one year after surgery by validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaires were standardized and pooled to a single scale. Lifestyle intervention was considered the reference point. Results: An initial reference search yielded 5636 articles; 18 studies were evaluated. In the comparison of total quality of life scores, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, 97.5% credible interval (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric bypass (LRYGB) (M: 4.973, CrI: 2.627; 7.317), and open Roux-en-Y gastric bypass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than the other bariatric interventions relative to lifestyle intervention. In the analysis of the physical aspects of quality of life, we noticed better results for LSG (M: 3.348, CrI: 0.548; 6.147) and the LRYGB procedure (M: 5.070, CrI: 2.896; 7.208) than for the control intervention, and the worst results for open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing emotional aspects, we found better results than the control intervention for LSG, LRYGB, open RYGB, and laparoscopic gastric plication.
In general health, better results were found for LSG (M: 9.144, CrI: 4.704; 13.470), LRYGB (M: 6.451, CrI: 10.240; 13.830), and single-anastomosis gastric bypass (M: 8.671, CrI: 1.986; 15.310), and the worst results for open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vitality aspects of quality of life, better results were observed for LSG and LRYGB than for the control intervention. We did not find any differences between bariatric interventions in the role physical, mental, and bodily pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were found after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was found after the open RYGB procedure. The other interventions did not significantly affect quality of life after a year compared to dietary intervention.
Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up
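The abstract notes that all questionnaires were standardized and pooled to a single scale before comparison. One common way to do this is to linearly rescale each instrument's native score range onto 0-100; a minimal sketch (the instrument names and ranges below are hypothetical examples, not necessarily those used in the review):

```python
# Illustrative sketch (not the authors' code): linearly rescaling
# quality-of-life scores from instruments with different native ranges
# onto a common 0-100 scale before pooling.

def rescale(score, lo, hi):
    """Linearly map a raw score from [lo, hi] onto the 0-100 scale."""
    return 100.0 * (score - lo) / (hi - lo)

# Hypothetical instruments: name -> (minimum score, maximum score)
instruments = {
    "SF-36 physical": (0, 100),
    "GIQLI": (0, 144),
    "BAROS-QoL": (-3, 3),
}

raw_scores = {"SF-36 physical": 62.0, "GIQLI": 108.0, "BAROS-QoL": 1.5}

pooled = {name: rescale(raw_scores[name], lo, hi)
          for name, (lo, hi) in instruments.items()}
```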
Procedia PDF Downloads 159
601 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals lack the power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of the proposed system is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, the proposed scheme significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
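For readers unfamiliar with fingerprinting, the underlying localization step can be sketched as a weighted k-nearest-neighbor search over a radio map of received-signal-strength (RSS) vectors. This toy example illustrates only that basic idea; the paper's actual method builds the radio map with a semi-supervised DCGAN and t-SNE feature extraction, and the positions and RSS values below are invented for illustration:

```python
# Toy RSS fingerprinting localization via distance-weighted k-nearest
# neighbors over a small, hypothetical offline radio map.
import math

# Hypothetical radio map: (x, y) position in meters -> RSS vector (dBm)
# from three access points recorded during an offline site survey.
radio_map = {
    (0.0, 0.0): [-40, -70, -85],
    (5.0, 0.0): [-70, -45, -80],
    (0.0, 5.0): [-75, -80, -42],
    (5.0, 5.0): [-85, -60, -55],
}

def locate(rss, k=2):
    """Estimate position as the distance-weighted mean of the k closest fingerprints."""
    dists = sorted(
        (math.dist(rss, fp), pos) for pos, fp in radio_map.items()
    )[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in dists]
    wsum = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, dists)) / wsum
    y = sum(w * p[1] for w, (_, p) in zip(weights, dists)) / wsum
    return x, y
```

An online measurement close to a stored fingerprint is pulled toward that fingerprint's coordinates; generative augmentation of the radio map, as in the paper, reduces how densely the survey points must be collected.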
Procedia PDF Downloads 42
600 Intracommunity Attitudes Toward the Gatekeeping of Asexuality in the LGBTQ+ Community on Tumblr
Authors: A.D. Fredline, Beverly Stiles
Abstract:
This qualitative investigation examines the social media site Tumblr with the goal of analyzing the controversy regarding the inclusion of asexuality in the LGBTQ+ community. As platforms such as Tumblr permit the development of communities for marginalized groups, social media serves as a core component of exclusionary practices and boundary negotiations for community membership. This research is important because there is a paucity of research on the topic and a significant gap in the literature with regard to intracommunity gatekeeping, even though discourse on the topic is blatantly apparent on social media platforms. The objective is to begin to bridge the gap in the literature by examining attitudes towards the inclusion of asexuality within the LGBTQ+ community. In order to analyze these attitudes, eight publicly available blogs on Tumblr.com were selected from both the “inclusionist” and “exclusionist” perspectives. The blogs were found through a basic search for “inclusionist” and “exclusionist” on the Tumblr website. Out of the first twenty blogs listed for each set of results, those centrally focused on asexuality discourse were selected. For each blog, the fifty most recent postings were collected. Analysis of the collected postings exposed three central themes from the exclusionist perspective as well as three from the inclusionist perspective. Findings indicate that, from the inclusionist perspective, asexuality belongs to the LGBTQ+ community. One primary argument from this perspective is that asexual individuals face opposition for their identity just as other identities included in the community do. This opposition is said to take a variety of forms, such as verbal shaming, assumption of illness, and corrective rape.
Another argument is that the LGBTQ+ community and asexuals face a common opponent in cisheterosexism, as asexuals struggle with assumed and expected sexualization. A final central theme is that denying asexual inclusion leads to the assumption of heteronormativity. Findings also indicate that, from the exclusionist perspective, asexuality does not belong to the LGBTQ+ community. One central theme from this perspective is the equating of cisgender heteroromantic asexuals with cisgender heterosexuals. As straight individuals are not allowed in the community, exclusionists argue that asexuals engaged in opposite-gender partnerships should not be included. Another argument is that including asexuality in the community sexualizes all other identities by assuming sexual orientation is inherently sexual rather than romantic. Finally, exclusionists also argue that asexuality encourages childhood labeling and forces sexual identities on children, something not promoted by the LGBTQ+ community. The conclusion drawn from analyzing both perspectives is that integration may be a possibility, but complexities add another layer of discourse. For example, both inclusionists and exclusionists agree that privileged identities do not belong to the LGBTQ+ community; the focus of discourse is whether or not asexuals are privileged. Clearly, both sides of the debate have the same vision of what binds the community together. The question that remains is who belongs to that community.
Keywords: asexuality, exclusionists, inclusionists, Tumblr
Procedia PDF Downloads 187
599 A System for Preventing Inadvertent Exposition of Staff Present outside the Operating Theater: Description and Clinical Test
Authors: Aya Al Masri, Kamel Guerchouche, Youssef Laynaoui, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: Mobile C-arms move between the rooms of the operating theater. Because they are designed to be mobile, they are not equipped with relays to retrieve exposure information and export it outside the room. Therefore, no light signaling is available outside the room to warn staff of X-ray emission, and inadvertent exposure of staff outside the operating room is a real radiation protection problem. The French standard NFC 15-160 requires that (1) access to any room containing an X-ray emitting device be controlled by light signage so that it cannot be crossed inadvertently, and (2) an emergency button be available to stop X-ray emission. This study presents a system that we developed to meet these requirements and the results of its clinical test. Materials and methods: The system is composed of two communicating boxes. The "DetectBox", installed inside the operating room, identifies the various operating states of the C-arm by analyzing its power supply signal and communicates wirelessly with the second box. The "AlertBox", which can operate on mains or battery power, is installed outside the operating room; it detects and reports the state of the C-arm by emitting a real-time light signal that can have three different colors: red when the C-arm is emitting X-rays, orange when it is powered on but not emitting, and green when it is powered off. The two boxes communicate over a radiofrequency link carried exclusively in the Industrial, Scientific and Medical (ISM) frequency bands, which allows several on-site warning systems to coexist without communication conflicts (interference).
Taking into account the complexity of performing electrical work in the operating theater (for reasons of hygiene and continuity of medical care), the system (measuring less than 10 cm²) operates safely without any intrusion into the mobile C-arm and requires no specific electrical installation work. The system is equipped with an emergency button that stops X-ray emission, and it has been clinically tested. Results: The clinical tests show that the system detects X-rays of both high and low energy (50–150 kVp) and high and low photon flux (0.5–200 mA), even when emitted for a very short time (<1 ms), with a probability of false detection below 10⁻⁵; it operates under all acquisition modes (continuous, pulsed, fluoroscopy, image, subtraction, and movie modes) and is compatible with all C-arm models and brands. We also tested the communication between the two boxes (DetectBox and AlertBox) under several conditions: (1) unleaded rooms, (2) leaded rooms, and (3) rooms with particular configurations (airlocks, great distances, concrete walls, 3 mm of lead). The results of these last tests were positive. Conclusion: This system is a reliable tool to alert staff present outside the operating room of X-ray emission and ensure their radiation protection.
Keywords: clinical test, inadvertent staff exposure, light signage, operating theater
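The AlertBox behavior described above amounts to a simple mapping from the detected C-arm state to a light color. A minimal sketch (the state names are illustrative, not taken from the device firmware):

```python
# Map the C-arm state detected by the DetectBox to the AlertBox light color:
# red while X-rays are emitted, orange when powered but idle, green when off.
LIGHT_BY_STATE = {
    "emitting": "red",
    "powered_idle": "orange",
    "off": "green",
}

def alert_color(state):
    """Return the warning-light color for a detected C-arm state."""
    return LIGHT_BY_STATE[state]
```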
Procedia PDF Downloads 126
598 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants
Authors: S. Ebadzadsahraei, H. Kazemian
Abstract:
The main objective of this research was to measure particulate matter (PM₂.₅) and Volatile Organic Compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in a city neighborhood and quantifies the percentage of VOCs in the city's air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining the PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using Inductively Coupled Plasma Mass Spectrometry (ICP-MS) for elemental analysis.
VOC measurement of the air samples was performed using gas chromatography with flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. In this study, a sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, two sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare the samples for measurement by a GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed greatly between wildfire and normal days: 83.0 μg/m³ during wildfire periods versus 23.7 μg/m³ in daily samples. Higher concentrations of iron, nickel, and manganese were found in all samples, and mercury was found in some samples. At high doses, these elements can have negative health effects.
Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compounds, VOCs
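As a sketch of how gravimetric PM₂.₅ concentrations such as those quoted above are obtained, the mass gained by the filter is divided by the volume of air drawn through the sampler. All numbers below are hypothetical, not values from the study:

```python
# Illustrative sketch: gravimetric PM2.5 concentration from the filter
# mass gain and the volume of air sampled.

def pm_concentration(mass_pre_mg, mass_post_mg, flow_lpm, minutes):
    """Return the PM2.5 concentration in ug/m^3."""
    mass_ug = (mass_post_mg - mass_pre_mg) * 1000.0   # mg -> ug
    volume_m3 = flow_lpm * minutes / 1000.0           # L -> m^3
    return mass_ug / volume_m3

# Hypothetical 24 h sample at 16.7 L/min with a 0.40 mg filter mass gain:
c = pm_concentration(12.10, 12.50, 16.7, 24 * 60)
```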
Procedia PDF Downloads 143
597 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management
Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal
Abstract:
Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse such massive data. BDA can assist organisations in capturing, storing, and analysing data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at high speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, the question arises whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the potential of BDA across the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can give pharmaceutical entities improved visibility over the whole supply chain and the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can address drug demand at hospitals and help respond to unanticipated issues such as drug shortages. Earlier studies explored BDA in the context of clinical healthcare; this presentation instead investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers’ insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality in which managers may opt for analytics for improved decision-making in supply chain processes.
Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management
Procedia PDF Downloads 107
596 Pibid and Experimentation: A High School Case Study
Authors: Chahad P. Alexandre
Abstract:
PIBID (Institutional Program of Scholarships to Encourage Teaching) is a Brazilian government program that today counts 48,000 students. Its goal is to motivate students to stay in teaching undergraduate programs and to help fill the gap of 100,000 teachers needed today in secondary schools. The greatest shortages of teachers today are in physics, chemistry, mathematics, and biology. At IFSP-Itapetininga, we built our physics PIBID project around practical activities. Our scholarship students are divided between two São Paulo state high schools in the same city. The project proposes class activities based on experimentation, observation, and understanding of physical phenomena. The didactic experiments always relate to the content the teacher is working on; the teacher is also the supervisor of the program in the school. Before each experiment, a short questionnaire is applied to learn about the students' preconceptions, and another is filled in afterwards to evaluate whether new concepts have been formed. This procedure is carried out in order to compare the students' previous knowledge with how it changed after the experiment. The primary goal of our project is to make physics classes more attractive, to develop in high school students an interest in learning physics, and to show the relation of physics to everyday life and the technological world. The objective of the experimental activities is to facilitate the understanding of the concepts worked on in class, because through experimentation the PIBID scholarship student stimulates the curiosity of the high school student, who can thus develop the capacity to understand and identify physical phenomena through concrete examples. Knowing how to identify these phenomena and where they are present in everyday life makes the learning process more significant and pleasant.
This proposal make achievable to the students to practice science, to appropriate of complex, in the traditional classes, concepts and overcoming the common preconception that physics is something distant and that is present only on books. This preconception is extremely harmful in the process of scientific knowledge construction. This kind of learning – through experimentation – make the students not only accumulate knowledge but also appropriate it, also to appropriate experimental procedures and even the space that is provided by the school. The PIBID scholarship students, as future teachers also have the opportunity to try experimentation classes, to intervene in the classes and to have contact with their future career. This opportunity allows the students to make important reflection about the practices realized and consequently about the learning methods. Due to this project, we found out that the high school students stay more time focused in the experiment compared to the traditional explanation teachers´ class. As a result in a class, as a participative activity, the students got more involved and participative. We also found out that the physics under graduated students drop out percentage is smaller in our Institute than before the PIBID program started.Keywords: innovation, projects, PIBID, physics, pre-service teacher experiences
Procedia PDF Downloads 341
595 Variation of Lexical Choice and Changing Need of Identity Expression
Authors: Thapasya J., Rajesh Kumar
Abstract:
Language plays complex roles in society. Previous studies on language and society explain their interconnected, complementary, and complex interactions, and were primarily focused on variation in language. Variation being the fundamental nature of languages, questions of personal and social identity have been navigated through language variation, establishing an interconnection between language variation and identity. This paper analyses sociolinguistic variation at the lexical level and how the lexical choice of speakers shapes their identity. It draws primary data from the lexicon of the Mappila dialect of Malayalam, spoken by members of the Mappila (Muslim) community of Kerala. Variation in lexical choice is analysed using 15-minute speech samples from four different age groups of Mappila dialect speakers. Various contexts were analysed, and the frequency of borrowed words in each instance was calculated in order to draw conclusions about how variation occurs in the speech community. The paper shows how the lexical choices of speakers can be socially motivated and involved in shaping and changing identities. Lexical items clearly signal both group identity and personal identity. The Mappila dialect of Malayalam was rich in frequently used borrowed words from Arabic, Persian, and Urdu. Their use was a deliberate attempt to display identity as a Mappila community member, derived from the socio-political situation of the time. This created a clear surface-level distinction between the Mappila dialect and other dialects of Malayalam, motivated by the wish to create and establish the identity of a person as a member of the Mappila community.
Historically, this kind of linguistic variation was strongly motivated by socio-political factors intertwined with the history of the origin and spread of Islam in the region; members of the Mappila community were highly motivated to project their identity as Mappilas because of the social insecurities they had faced before accepting the religion. The deliberate inclusion of Arabic, Persian, and Urdu words in their speech thus helped display that identity. However, the socio-political situation that existed at the origin of the Mappila community has changed over time. The social motivation for signalling Mappila identity no longer exists, and the frequency of words borrowed from Arabic, Persian, and Urdu has consequently been reduced in their speech. Apart from religious terms, borrowings from these languages are now very few. The analysis traces changes in the language of speakers according to age; significant variation was found between generations, with literacy playing a major role in this process. The need to project a specific identity varies with changes in the socio-political scenario, and variation in language can shape identity so as to keep pace with the changing socio-political situation in any language community.
Keywords: borrowings, dialect, identity, lexical choice, literacy, variation
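The borrowing-frequency calculation described above can be sketched as a simple token tally. The loanword list and the age-group samples below are invented placeholders, not the study's Mappila field data:

```python
from collections import Counter

# Hypothetical Arabic/Persian/Urdu loanword list -- a stand-in for the
# study's actual Mappila lexicon, which comes from field recordings.
BORROWED = {"kitab", "dunya", "khabar", "duniyav"}

def borrowing_rate(tokens):
    """Share of tokens that belong to the loanword lexicon."""
    counts = Counter(t.lower() for t in tokens)
    loans = sum(n for word, n in counts.items() if word in BORROWED)
    total = sum(counts.values())
    return loans / total if total else 0.0

# Invented speech samples for two age groups, for illustration only.
older_speakers = ["kitab", "undu", "dunya", "khabar", "aanu"]
younger_speakers = ["pusthakam", "undu", "lokam", "vartha", "aanu"]

print(borrowing_rate(older_speakers))    # higher share of loanwords
print(borrowing_rate(younger_speakers))  # loanwords largely absent
```

Running the same tally per context and per age group would yield the frequency comparison the study uses to track the generational decline in borrowings.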
Procedia PDF Downloads 238
594 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate
Authors: E. Calil, L. A. Pereira
Abstract:
The acquisition of orthography is a complex process involving both lexical and grammatical questions. This learning occurs simultaneously with the mastery of multiple textual aspects (e.g., graphs, punctuation, etc.). However, most research on orthographic acquisition approaches it from an autonomous point of view, separated from the process of textual production. This means that the object of analysis is the production of words selected by the researcher, or of requested sentences, in an experimental and controlled setting. In addition, the Spelling Problems (SPs) are identified by the researcher on the sheet of paper. Adopting the perspective of Textual Genetics, from an enunciative approach, this study discusses the SPs recognized by dyads of newly literate students while they write a text collaboratively. Six textual production proposals, set by a 2nd-year teacher at a Portuguese primary school, were recorded between January and March 2015. In our case study, we discuss the SPs recognized by the dyad B and L (7 years old). As a methodological tool we adopted the Ramos System of audiovisual recording. This system captures in real time the text in progress and the face-to-face dialogue between the two students and their teacher, as well as the participants' body movements and facial expressions during the textual production proposals in the classroom. Under these ecological conditions of multimodal registration of collaborative writing, we could identify the emergence of SPs in two dimensions: (i) in the product (finished text), the identification of SPs without recursive graphic marks (without erasures) and of SPs with erasures, the latter indicating recognition of the SP by the student; (ii) in the process (text in progress), the identification of comments made by students about recognized SPs. Given this, we analysed the comments on SPs identified during the text in progress.
These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE takes two enunciative forms: the Simple Comment (SC), such as "'X' is written with 'Y'"; and the Unfolded Comment (UC), such as "'X' is written with 'Y' because...". The spelling COE may occur before or during the SP (Early Spelling Recognition, ESR) or after the SP has been written (Later Spelling Recognition, LSR). There were 631 words written in the 6 stories produced by the B-L dyad, 145 of them containing some type of SP. During the text in progress, the students orally recognized 174 SPs, 46 of which were identified in advance (ESRs) and 128 after writing (LSRs). If we consider that the 88 erased SPs in the product indicate some form of SP recognition, we can observe that about twice as many SPs were recognized orally. The ESR was characterized by SCs, as when students asked their colleague or teacher how to spell a given word. The LSR presented predominantly UCs, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyads is an important didactic strategy for promoting metalinguistic reflection, favouring the learning of spelling.
Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production
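The figures reported for the B-L dyad can be tallied in a few lines; the record below simply restates the abstract's totals to make the arithmetic explicit:

```python
# Totals reported in the abstract for the B-L dyad's six stories.
dyad = {"words": 631, "sp_total": 145, "esr": 46, "lsr": 128, "erasures": 88}

oral = dyad["esr"] + dyad["lsr"]             # SPs recognized orally in the process
sp_share = dyad["sp_total"] / dyad["words"]  # share of written words with an SP
ratio = oral / dyad["erasures"]              # oral recognitions vs. erasure-marked SPs

print(oral)             # 174 oral recognitions
print(round(sp_share, 2))
print(round(ratio, 1))  # roughly twice as many SPs recognized orally as erased
```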
Procedia PDF Downloads 163
593 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension between 1 µm and 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a reliable and validated analytical technique to sample, analyse, and quantify MPs is still under development and testing. Among characterization techniques, vibrational spectroscopy is widely adopted in the field of polymers, and the ongoing miniaturization of these methods is set to transform the plastic recycling industry. In this scenario, we investigated the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the laboratory. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built.
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Distinct blends of these three microplastics were prepared according to a designed ternary composition plot. After exploratory PCA, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, PLS was calibrated with 42 data points, and the model was used to predict the composition of the 21 unknown mixtures of the test set. The advantage of this consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes and even allows on-site evaluation, thereby satisfying the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
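As an illustration of the quantitative step, the sketch below recovers mixture fractions from noise-free synthetic "spectra" by ordinary least squares, a deliberately simplified stand-in for the PLSR model used in the study; the three-channel pure-component spectra are invented, not MicroNIR data:

```python
# Toy stand-in for the PLSR step: with noise-free linear mixing and known
# pure-component spectra, composition can be recovered by solving a linear
# system. Real work uses PLS on full MicroNIR spectra with noise.

PURE = {            # invented reflectance at three hypothetical wavelengths
    "PE": [0.9, 0.2, 0.1],
    "PP": [0.3, 0.8, 0.2],
    "PS": [0.1, 0.3, 0.7],
}

def mix(fractions):
    """Spectrum of a blend as the fraction-weighted sum of pure spectra."""
    return [sum(f * PURE[p][i] for p, f in fractions.items()) for i in range(3)]

def unmix(spectrum):
    """Solve the 3x3 system for mixture fractions by Gauss-Jordan elimination."""
    a = [PURE[p] for p in ("PE", "PP", "PS")]
    # augmented matrix: rows = wavelengths, columns = components + observed value
    m = [[a[0][i], a[1][i], a[2][i], spectrum[i]] for i in range(3)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

true_blend = {"PE": 0.5, "PP": 0.3, "PS": 0.2}
estimate = unmix(mix(true_blend))
print([round(x, 2) for x in estimate])  # recovers the PE/PP/PS fractions
```

In the study's setting, the 42 calibration points constrain a PLS model that then predicts the 21 held-out ternary compositions; the sketch shows only the underlying linear-mixing idea.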
Procedia PDF Downloads 165
592 Qualitative Characterization of Proteins in Common and Quality Protein Maize Corn by Mass Spectrometry
Authors: Benito Minjarez, Jesse Haramati, Yury Rodriguez-Yanez, Florencio Recendiz-Hurtado, Juan-Pedro Luna-Arias, Salvador Mena-Munguia
Abstract:
During the last decades, the world has experienced rapid industrialization and an expanding economy, favouring a demographic boom. As a consequence, countries around the world have focused on developing new strategies for the production of different farm products in order to meet future demand. Consequently, different strategies have been developed to improve the major food products for both humans and livestock. Corn, after wheat and rice, is the third most important crop globally and is the primary food source for humans and livestock in many regions of the globe. In addition, maize (Zea mays) is an important source of protein, accounting for up to 60% of the daily human protein supply. Generally, many cereal grains have proteins with relatively low nutritional value compared with proteins from meat. In the case of corn, much of the protein is found in the endosperm (75 to 85%) and is deficient in two essential amino acids, lysine and tryptophan. This deficiency results in an imbalance of amino acids and low protein content; normal maize varieties have less than half of the amino acids recommended for human nutrition. Studies have shown that this deficiency is associated with symptoms of growth impairment, anemia, hypoproteinemia, and fatty liver. Because most presently available maize varieties do not contain the quality and quantity of protein necessary for a balanced diet, different countries have focused research on quality protein maize (QPM). Researchers have characterized QPM, noting that these varieties may contain 70 to 100% more of the amino acid residues essential for animal and human nutrition, lysine and tryptophan, than common corn. Several countries in Africa and Latin America, as well as China, have incorporated QPM into their agricultural development plans.
Many of these countries have chosen a specific QPM variety based on their local needs and climate. Reviews have described maize breeding methods and revealed the lack of studies on the genetic and proteomic diversity of proteins in QPM varieties and on their genetic relationships with normal maize varieties. Therefore, molecular marker identification using tools such as mass spectrometry may accelerate the selection of plants that carry the desired proteins with high lysine and tryptophan concentrations. To date, QPM lines have played a very important role in alleviating malnutrition, and better characterization of these lines would provide a valuable nutritional enhancement for the resource-poor regions of the world. Thus, the objective of this study was to identify proteins in QPM maize in comparison with a common maize line as a control.
Keywords: corn, mass spectrometry, QPM, tryptophan
Procedia PDF Downloads 288
591 Functionalization of Sanitary Pads with Probiotic Paste
Authors: O. Sauperl, L. Fras Zemljic
Abstract:
The textile industry is gaining increasing importance in the field of medical materials. The presented research is therefore focused on textile materials for external (out-of-body) use. Such materials include various hygienic textile products (diapers, tampons, sanitary napkins, incontinence products, etc.), protective textiles, various hospital linens (surgical covers, masks, gowns, cloths, bed linens, etc.), wound pillows, bandages, orthopedic socks, and so on. The function of tampons and sanitary napkins is not only to provide protection during the menstrual cycle; they can also take care of physiological or pathological vaginal discharge. In general, women's intimate areas are protected against infection by the low pH of the vaginal flora. This acidity inhibits the development of harmful microorganisms, which reproduce poorly in an acidic environment. The normal vaginal flora of healthy women is highly colonized by lactobacilli, and the lactic acid produced by these organisms maintains the constant acidity of the vagina. If this balance of natural protection breaks down, infections can occur. Probiotic tampons already exist on the market as a medical product supplying the vagina with beneficial probiotic lactobacilli. However, many users have concerns about tampons because of the possible drying-out of the vagina and the risk of toxic shock syndrome, which is why they mainly use sanitary napkins during the menstrual cycle. Functionalizing sanitary napkins with probiotics is therefore interesting as a means of maintaining a healthy vaginal flora and of offering users the added value of health- and environmentally-friendly products.
For this reason, the presented research focuses on functionalizing sanitary napkins with a probiotic paste so that the lactic acid bacteria in the core of the functionalized napkin are activated at the moment of contact with menstrual fluid. In this way, lactobacilli could penetrate the vagina and, by maintaining a healthy vaginal flora, reduce the risk of vaginal disorders. With regard to this research problem, the influence of probiotic paste applied onto cotton hygienic napkins on selected properties was studied. The aim of the research was to determine whether sanitary napkins with an applied probiotic paste can assure a suitable vaginal pH and thus maintain a healthy vaginal flora during the use of the product. In addition, the sorption properties of the probiotic-functionalized napkins were evaluated and compared with untreated ones. The research itself was carried out by tracking and controlling the input parameters currently defined as most important by the Slovenian producer Tosama d.o.o. Successful functionalization of the sanitary pads with the probiotic paste was confirmed by ATR-FTIR spectroscopy. The results of the methods used within the presented research show that the absorption of pads treated with probiotic paste deteriorates compared to non-treated ones, while the coating remains stable for six months. Functionalization of sanitary pads with probiotic paste is believed to have commercial potential for lowering the probability of infection during the menstrual cycle.
Keywords: functionalization, probiotic paste, sanitary pads, textile materials
Procedia PDF Downloads 191
590 The Decision-Making Mechanisms of Tax Regulations
Authors: Nino Pailodze, Malkhaz Sulashvili, Vladimer Kekenadze, Tea Khutsishvili, Irma Makharashvili, Aleksandre Kekenadze
Abstract:
Among the important problems Georgia must solve in the near future, the most important is economic stability, which rests on fiscal policy and the proper definition of its directions. The main source of budget revenue is the national income. The state uses taxes, loans, and emission to draw on the national income, with taxes as the principal instrument. Alongside the fiscal function of funding the budget, tax systems also implement economic and social development functions and regulate foreign economic relations. A tax is a mandatory, unconditional monetary payment to the budget made by a taxpayer in accordance with the Tax Code, based on the necessary, non-equivalent, and gratuitous character of the payment. Taxes are national and local. National taxes are those provided for under the Code, the payment of which is mandatory across the whole territory of Georgia. Local taxes are those provided for under the Code and introduced by normative acts of local self-government representative authorities (within marginal rates), the payment of which is mandatory within the territory of the relevant self-governing unit. National taxes play the leading role in the tax system, but local taxes are also important: it is chiefly through local taxes that the budgets of self-governing units are formed. The national taxes are income tax, profit tax, value added tax (VAT), excise tax, and import duty; property tax is a local tax and one of the significant taxes in Georgia. The paper deals with the taxation mechanism that has operated in Georgia, which has a great influence on financial accounting. By comparing foreign legislation with Georgian legislation, we discuss the opportunity of using foreign experience, and we offer recommendations for improving the tax system in financial accounting.
In addition to accounting, which is regulated according to the International Accounting Standards, there is tax accounting, which is regulated by the Tax Code and by various legal orders and regulations of the Minister of Finance. Compliance with these rules is controlled by the tax authority, the Revenue Service. The tax burden is directly related to the expenditures of the state from the first day of its existence. The fiscal policy of the state comprises both state expenditure and taxation decisions. In order to achieve the best and most effective mobilization of funds, the government's primary task is to decide on the rules of taxation. The function of a tax reveals the substance of the act: taxes have a distribution (fiscal) function as well as control and regulatory functions. Foreign tax systems evolved under the influence of different economic, political, and social conditions, and they differ greatly from each other in their taxes, structure, collection methods, rates, levels of fiscal authority, tax base, sphere of action, and tax breaks.
Keywords: international accounting standards, financial accounting, tax systems, financial obligations
Procedia PDF Downloads 243
589 Ant and Spider Diversity in a Rural Landscape of the Vhembe Biosphere, South Africa
Authors: Evans V. Mauda, Stefan H. Foord, Thinandavha C. Munyai
Abstract:
The greatest threat to biodiversity is the loss of habitat through landscape fragmentation and attrition. Land use changes are therefore among the most immediate drivers of species diversity. Urbanization and agriculture are the main drivers of habitat loss and transformation in the savanna biomes of South Africa. Agricultural expansion, and intensification in particular, takes place at the expense of biodiversity and will probably be the primary driver of biodiversity loss in this century. Arthropods show measurable behavioural responses to changing land mosaics at the smallest scales, and heterogeneous environments are therefore predicted to support more complex and diverse biological assemblages. Ants are premier soil turners and channelers of energy and dominate the insect fauna, while spiders are a mega-diverse group that can regulate other invertebrate populations. This study aims to quantify the response of these two taxa in the rural-urban mosaic of a rapidly developing communal area. The study took place in and around two villages in the north-eastern corner of South Africa. Two replicates of each of the dominant land use categories, viz. urban settlements, dryland cultivation, and cattle rangelands, were set out in each village and sampled during the dry and wet seasons, for a total of 2 villages × 3 land use categories × 2 seasons = 24 assemblages. Local-scale variables measured included vertical and horizontal habitat structure as well as the structural and chemical composition of the soil. Ant richness was not affected by land use but responded to local-scale variables such as vertical vegetation structure (+) and leaf litter cover (+), although vegetation complexity at lower levels was negatively associated with ant richness. Ant richness was, however, largely shaped by regional and temporal processes, invoking the importance of dispersal and historical processes.
Spider species richness was mostly affected by land use and local conditions, highlighting the importance of local landscape elements. Spider richness did not vary much between villages or across seasons and seems to be less dependent on context or history. A considerable amount of variation in spider richness remained unexplained, which could be related to factors not measured in this study, such as temperature and competition. For both ant and spider assemblages, the constrained ordination explained 18% of the variation in these taxa. Three environmental variables (leaf litter cover, active carbon, and rock cover) were important in explaining ant assemblage structure, while two (sand and leaf litter cover) were important for spider assemblage structure. This study highlights the importance of disturbance (land use activities) and of leaf litter, with their associated effects on ant and spider assemblages across the study area.
Keywords: ants, assemblages, biosphere, diversity, land use, spiders, urbanization
Procedia PDF Downloads 267
588 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of modelling and analysing a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using Report RAIB 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 detailing its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the incident. SIRI uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the MORT technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns, and of bow-tie (fault and event tree) model-based safety risk modelling techniques; however, the role of systematic errors due to heuristics and biases is not yet appreciated. Addressing it overcomes the omission of human and organizational factors from accident analysis.
Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioner and academic publications. This serves to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 189
587 Inconsistent Effects of Landscape Heterogeneity on Animal Diversity in an Agricultural Mosaic: A Multi-Scale and Multi-Taxon Investigation
Authors: Chevonne Reynolds, Robert J. Fletcher, Jr, Celine M. Carneiro, Nicole Jennings, Alison Ke, Michael C. LaScaleia, Mbhekeni B. Lukhele, Mnqobi L. Mamba, Muzi D. Sibiya, James D. Austin, Cebisile N. Magagula, Themba’alilahlwa Mahlaba, Ara Monadjem, Samantha M. Wisely, Robert A. McCleery
Abstract:
A key challenge for the developing world is reconciling biodiversity conservation with the growing demand for food. In these regions, agriculture is typically interspersed among other land uses, creating heterogeneous landscapes. A primary hypothesis for promoting biodiversity in agricultural landscapes is the habitat heterogeneity hypothesis. While there is evidence that landscape heterogeneity positively influences biodiversity, application of this hypothesis is hindered by the need to determine which components of landscape heterogeneity drive these effects and at what spatial scale(s). Additionally, whether diverse taxonomic groups are similarly affected is central to determining the applicability of this hypothesis as a general conservation strategy in agricultural mosaics. Two major components of landscape heterogeneity are compositional and configurational heterogeneity. Disentangling the role of each component is important for biodiversity conservation because each represents a different mechanism underpinning variation in biodiversity. We identified a priori independent gradients of compositional and configurational landscape heterogeneity within an extensive agricultural mosaic in north-eastern Swaziland. We then tested how bird, dung beetle, ant, and meso-carnivore diversity responded to compositional and configurational heterogeneity across six spatial scales. To determine whether a general trend could be observed across multiple taxa, we also tested which component and spatial scale were most influential for all taxonomic groups combined. Compositional, not configurational, heterogeneity explained diversity in each taxonomic group, with the exception of meso-carnivores. Bird and ant diversity were positively correlated with compositional heterogeneity at fine spatial scales (< 1000 m), whilst dung beetle diversity was negatively correlated with compositional heterogeneity at broader spatial scales (> 1500 m).
Importantly, because of these contrasting effects across taxa, there was no effect of either component of heterogeneity on combined taxonomic diversity at any spatial scale. The contrasting responses across taxonomic groups exemplify the difficulty of implementing effective conservation strategies that meet the requirements of diverse taxa. To promote diverse communities across a range of taxa, conservation strategies must be multi-scaled and may involve different strategies at different scales to offset the contrasting influences of compositional heterogeneity. A diversity of strategies is likely key to conserving biodiversity in agricultural mosaics, and we have demonstrated that a landscape management strategy that only manages for heterogeneity at one particular scale will likely fall short of management objectives.
Keywords: agriculture, biodiversity, composition, configuration, heterogeneity
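One common way to operationalize compositional heterogeneity is the Shannon diversity of land-cover proportions within a buffer around each site, which can then be related to richness separately at each spatial scale. The cover proportions and richness values below are invented for illustration; the study's actual gradients were derived a priori from land-cover data:

```python
import math

def shannon(proportions):
    """Shannon diversity of land-cover proportions -- a standard index of
    compositional heterogeneity for a buffer around a sampling site."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

def pearson(x, y):
    """Plain Pearson correlation, to relate heterogeneity to richness."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented proportions of (settlement, cropland, rangeland) within one buffer
# scale for four sites, with invented bird richness values.
covers = [[0.90, 0.05, 0.05], [0.60, 0.20, 0.20],
          [0.40, 0.30, 0.30], [0.34, 0.33, 0.33]]
richness = [12, 20, 26, 30]

heterogeneity = [shannon(c) for c in covers]
print(round(pearson(heterogeneity, richness), 2))  # positive at this scale
```

Repeating the correlation with covers recomputed in progressively larger buffers is the multi-scale comparison at the heart of the design; a sign change across scales would mirror the bird-versus-dung-beetle contrast reported above.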
Procedia PDF Downloads 263
586 Cultural Heritage, Urban Planning and the Smart City in Indian Context
Authors: Paritosh Goel
Abstract:
The conservation of historic buildings and historic Centre’s over recent years has become fully encompassed in the planning of built-up areas and their management following climate changes. The approach of the world of restoration, in the Indian context on integrated urban regeneration and its strategic potential for a smarter, more sustainable and socially inclusive urban development introduces, for urban transformations in general (historical centers and otherwise), the theme of sustainability. From this viewpoint, it envisages, as a primary objective, a real “green, ecological or environmental” requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of sources of renewable energy, urban metabolism (waste, water, territory, etc.) and natural environment. With this the concept of a “resilient city” is also introduced, which can adapt through progressive transformations to situations of change which may not be predictable, behavior that the historical city has always been able to express. Urban planning on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social/economic and perceptive parameters. It is connected with human behavior, mobility and the characterization of the consumption of resources, in terms of quantity even before quality to inform the city design process, which for ancient fabrics, and mainly affects the public space also in its social dimension. 
An exact definition of the term “smart city” remains elusive, since three dimensions can be attributed to the term: a) that of a virtual city, evolved on the basis of digital and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and social/economic project guided by a dynamic process that elicits new behaviours and requirements from city communities and orients the future planning of cities, also through participation in their management. This paper is a preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of obtaining a scientific theory and methodology to apply to the regeneration of Indian historical centres. If contextualised with the heritage of the city, the smart city scheme can be an initiative that provides a transdisciplinary approach between various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures), united in order to improve the design, livability and understanding of the urban environment and high historical/cultural performance levels.
Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies
Procedia PDF Downloads 281
585 Carbon Footprint of Educational Establishments: The Case of the University of Alicante
Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi
Abstract:
Environmental concerns are obtaining increasingly high priority in the sustainability agendas of educational establishments. This is important not only for an institution's environmental performance in its own right as an organization, but also to present a model for its students. On the other hand, universities play an important role in research and innovative solutions for measuring, analyzing and reducing the environmental impacts of different activities. The assessment and decision-making process during the activity of educational establishments is linked to the application of robust indicators. In this regard, the carbon footprint is a developing sustainability indicator that helps understand the direct impact on climate change. But it is not easy to implement: a large number of factors are involved, which increases its complexity, such as different simultaneous uses (research, lecturing, administration), different users (students, staff) and different levels of activity (lecturing, exam or holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user at a university campus, considering the two main aspects of carbon accounting: building operations and transport. Different methodologies applied in other Spanish university campuses are analyzed and compared to obtain a final proposal to be developed in this type of establishment. First, the building operation calculation considers the different uses and energy sources consumed. Second, for the transport calculation, the different users and working hours are calculated separately, as well as their origins and travel preferences. For every transport mode, a different conversion factor is used depending on the carbon emissions produced. The final result is obtained as an average of carbon emissions produced per user. A case study is applied to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated.
While the building operation consumptions are known per building and month, the same is not true for transport: only one survey of users' transport habits was conducted, in 2009/2010, so no evolution of results can be shown in this case. Besides, building operations are not split per use, as building services are not monitored separately. These results are analyzed in depth considering all factors and limitations, and compared to estimations for other campuses. Finally, the application of the presented methodology is also studied. The recommendations concluded in this study aim to enhance carbon emission monitoring and control. A Carbon Action Plan is then a primary solution to be developed. On the other hand, the application developed on the University of Alicante campus can not only further enhance the methodology itself, but also render its adoption by other educational establishments more readily possible, and yet with a considerable degree of flexibility to cater for their specific requirements.
Keywords: building operations, built environment, carbon footprint, climate change, transport
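The per-user accounting described in this abstract (building operations plus transport, each converted with emission factors and averaged over users) can be sketched as simple arithmetic. The sketch below is a minimal illustration, not the study's model: the emission factors, consumption figures, transport modes and user count are all invented placeholders.

```python
# Hedged sketch of per-user carbon accounting (building operations + transport).
# All factors and quantities below are illustrative placeholders, not study data.

def building_emissions(kwh_by_source, energy_factors):
    """Emissions (kg CO2e) from building operations, summed over energy sources."""
    return sum(kwh * energy_factors[src] for src, kwh in kwh_by_source.items())

def transport_emissions(trips, mode_factors):
    """Emissions from transport; each trip is a (mode, km travelled) pair."""
    return sum(km * mode_factors[mode] for mode, km in trips)

def footprint_per_user(kwh_by_source, trips, n_users, energy_factors, mode_factors):
    """Average campus carbon footprint per user (kg CO2e/user)."""
    total = (building_emissions(kwh_by_source, energy_factors)
             + transport_emissions(trips, mode_factors))
    return total / n_users

# Illustrative annual figures for a hypothetical campus
energy_factors = {"electricity": 0.25, "gas": 0.18}      # kg CO2e per kWh
mode_factors = {"car": 0.19, "bus": 0.10, "bike": 0.0}   # kg CO2e per km
kwh_by_source = {"electricity": 1_000_000, "gas": 200_000}
trips = [("car", 500_000), ("bus", 300_000), ("bike", 50_000)]

per_user = footprint_per_user(kwh_by_source, trips, 25_000,
                              energy_factors, mode_factors)
```

With these placeholder figures the result is roughly 16.4 kg CO2e per user; in practice, the factors would come from national conversion tables and the transport survey described in the abstract.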
Procedia PDF Downloads 295
584 Exploring Artistic Creation and Autoethnography in the Spatial Context of Geography
Authors: Sinem Tas
Abstract:
This research paper studies the perspective of personal experience in relation to spatial dynamics and artistic outcomes within the realm of cultural identity. The article serves as a partial analysis within a broader PhD investigation that focuses on the cultural dynamics and political structures behind cultural identity through an autoethnography of narrative, while presenting its correlation with artistic creation in the context of space and people. Focusing on the artistic/creative practice project AUTRUI, the primary goal is to analyse and understand the influence of personal experiences and culturally constructed identity as an artist on the compositional modality of the final image, considering self-reflective experience. The works of Joyce Davidson and Christine Milligan, scholars who emphasise the importance of emotion and spatial experience in geographical studies, contribute to this work, as they highlight the significance of emotion across various spatial scales in Embodying Emotion Sensing Space: Introducing Emotional Geographies (2004). Their perspective suggests that understanding emotions within different spatial contexts is crucial for comprehending human experiences and interactions with space. Incorporating the insights of scholars like Yi-Fu Tuan, particularly his seminal work Space and Place: The Perspective of Experience (1979), is important for creating an in-depth frame of geographical experience. Tuan's humanistic perspective on space and place provides a valuable theoretical framework for understanding the interplay between personal experiences and spatial contexts. A substantial contextualisation of the geopolitics of Turkey, and its implications for national identity and cohesion, will be addressed by drawing an outline of the political and geographical frame as a methodological strategy to understand the dynamics behind this research.
Besides the bibliographical reading, the methods used to study this relation are participatory observation, memory work along with memoir analysis, personal interviews, and discussion of photographs and news. The utilisation of the self as data requires the analysis of written sources with personal engagement. By delving into written sources such as written communications and diaries as well as memoirs, the research gains a firsthand perspective, enriching the analytical depth of the study. Furthermore, the examination of photography and news articles serves as a valuable means of contextualising experiences from a journalist's background within specific geographical settings. The inclusion of interviews with close family members provides firsthand perspectives and intimate insights rooted in shared experiences within similar geographical contexts, offering complementary and diversified viewpoints that enhance the comprehensiveness of the investigation.
Keywords: art, autoethnography, place and space, Turkey
Procedia PDF Downloads 50
583 ATR-IR Study of the Mechanism of Aluminum Chloride Induced Alzheimer Disease - Curative and Protective Effect of Lepidium sativum Water Extract on Hippocampus Rats Brain Tissue
Authors: Maha J. Balgoon, Gehan A. Raouf, Safaa Y. Qusti, Soad S. Ali
Abstract:
The main cause of Alzheimer disease (AD) is believed to be the accumulation of free radicals owing to oxidative stress (OS) in brain tissue. The mechanism of neurotoxicity of aluminum chloride (AlCl3)-induced AD in hippocampal albino Wistar rat brain tissue, and the curative and protective effects of Lepidium sativum (LS) water extract, were assessed after 8 weeks by attenuated total reflection infrared spectroscopy (ATR-IR) and histologically by light microscopy. ATR-IR results revealed that the membrane phospholipids undergo free radical attacks, mediated by AlCl3, that primarily affect the polyunsaturated fatty acids, indicated by the increase of the olefinic -C=CH sub-band area around 3012 cm-1 from the curve fitting analysis. The narrowing in the half band width (HBW) of the sνCH2 sub-band around 2852 cm-1 due to Al intoxication indicates the presence of trans-form fatty acids rather than the gauche rotamer. The degradation of hydrocarbon chains to shorter chain lengths, the increase in membrane fluidity and disorder, and the decrease in lipid polarity in the AlCl3 group were indicated by the detected changes in certain calculated area ratios compared to the control. Administration of LS greatly improved these parameters compared to the AlCl3 group. Al influences Aβ aggregation and plaque formation, which in turn interferes with and disrupts the membrane structure. The results also showed a marked increase in the β-parallel and antiparallel structures that characterise Aβ formation in Al-induced AD hippocampal brain tissue, indicated by the detected increase in both amide I sub-bands around 1674 and 1692 cm-1. This drastic increase in Aβ formation was greatly reduced in the curative and protective groups compared to the AlCl3 group and approached the control values. These results were also supported by light microscopy. The AlCl3 group showed marked degenerative changes in hippocampal neurons: most cells appeared small, shrunken and deformed.
Interestingly, the administration of LS in the curative and protective groups markedly decreased the number of degenerated cells compared to the non-treated group, and the intensity of Congo red-stained cells also decreased. Hippocampal neurons looked more or less similar to those of the control. This study showed a promising therapeutic effect of Lepidium sativum (LS) in an AD rat model, substantially overcoming the signs of oxidative stress on membrane lipids and reversing the protein misfolding.
Keywords: aluminum chloride, Alzheimer disease, ATR-IR, Lepidium sativum
Procedia PDF Downloads 367
582 From By-product To Brilliance: Transforming Adobe Brick Construction Using Meat Industry Waste-derived Glycoproteins
Authors: Amal Balila, Maria Vahdati
Abstract:
Earth is a green building material with very low embodied energy and almost zero greenhouse gas emissions. However, it lacks strength and durability in its natural state. By responsibly sourcing stabilisers, it is possible to enhance its strength. This research draws inspiration from the robustness of termite mounds, in which termites incorporate glycoproteins from their saliva during construction. Biomimicry explores the potential of these termite stabilisers in producing bio-inspired adobe bricks. The meat industry generates significant waste during slaughter, including blood, skin, bones, tendons, gastrointestinal contents, and internal organs. While abundant, many meat by-products raise concerns regarding human consumption and religious, cultural and ethical beliefs, and they also contribute heavily to environmental pollution. Extracting and utilising proteins from this waste is vital for reducing pollution and increasing profitability. Exploring the untapped potential of meat industry waste, this research investigates how glycoproteins could revolutionise adobe brick construction. Bovine serum albumin (BSA) from cows' blood and mucin from porcine stomachs were the glycoproteins chosen as stabilisers for adobe brick production. Despite their wide usage across various fields, they have very limited utilisation in food processing; thus, both were identified as potential stabilisers for adobe brick production in this study. Two soil types were used to prepare adobe bricks for testing, comparing control unstabilised bricks with glycoprotein-stabilised ones. All bricks underwent testing for unconfined compressive strength and erosion resistance. The primary finding of this study is the efficacy of BSA, a glycoprotein derived from cows' blood and a by-product of the beef industry, as an earth construction stabiliser.
Adding 0.5% by weight of BSA resulted in a 17% and 41% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Further, adding 5% by weight of BSA led to a 202% and 97% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Moreover, using 0.1%, 0.2%, and 0.5% by weight of BSA resulted in erosion rate reductions of 30%, 48%, and 70% for British adobe bricks, respectively, with a 97% reduction observed for Sudanese adobe bricks at 0.5% by weight of BSA. However, mucin from the porcine stomach did not significantly improve the unconfined compressive strength of adobe bricks. Nevertheless, employing 0.1% and 0.2% by weight of mucin resulted in erosion rate reductions of 28% and 55% for British adobe bricks, respectively. These findings underscore BSA's efficiency as an earth construction stabiliser for wall construction and mucin's efficacy for wall render, showcasing their potential for sustainable and durable building practices.
Keywords: biomimicry, earth construction, industrial waste management, sustainable building materials, termite mounds
Procedia PDF Downloads 51
581 A New Perspective in Cervical Dystonia: Neurocognitive Impairment
Authors: Yesim Sucullu Karadag, Pinar Kurt, Sule Bilen, Nese Subutay Oztekin, Fikri Ak
Abstract:
Background: Primary cervical dystonia is thought to be a purely motor disorder, but recent studies have revealed that patients with dystonia have additional non-motor features; sensory and psychiatric disturbances can be included in the non-motor spectrum of dystonia. The basal ganglia receive inputs from all cortical areas and, through the thalamus, project to several cortical areas, thus participating in circuits that have been linked to motor as well as sensory, emotional and cognitive functions. However, there are limited studies indicating cognitive impairment in patients with cervical dystonia, and more evidence is required regarding neurocognitive functioning in these patients. Objective: This study aimed to investigate the neurocognitive profile of cervical dystonia patients in comparison to healthy controls (HC) by employing a detailed set of neuropsychological tests in addition to self-reported instruments. Methods: In total, 29 (M/F: 7/22) cervical dystonia patients and 30 HC (M/F: 10/20) were included in the study. Exclusion criteria were depression and absence of informed consent. Standard demographic and educational data and clinical reports (disease duration, disability index) were recorded for all patients. After a careful neurological evaluation, all subjects were given a comprehensive battery of neuropsychological tests: self-report of neuropsychological condition (by visual analogue scale, VAS, 0-100), RAVLT, STROOP, PASAT, TMT, SDMT, JLOT, DST, COWAT, ACTT, and FST. Patients and HC were compared regarding demographic and clinical features and neurocognitive tests. Correlations between disease duration, disability index and self-report VAS were also assessed. Results: There was no difference between patients and HC regarding socio-demographic variables such as age, gender and years of education (p levels were 0.36, 0.436 and 0.869, respectively).
All patients were assessed at the peak of botulinum toxin effect, and none was taking an anticholinergic agent or benzodiazepine. Dystonia patients had significantly impaired verbal learning and memory (RAVLT, p<0.001), divided attention and working memory (ACTT, p<0.001), attention speed (TMT-A and B, p=0.008, 0.050), executive functions (PASAT, p<0.001; SDMT, p=0.001; FST, p<0.001), verbal attention (DST, p=0.001), verbal fluency (COWAT, p<0.001), and visuo-spatial processing (JLOT, p<0.001) in comparison to healthy controls, but focused attention (STROOP, spontaneous correction) did not differ between the two groups (p>0.05). No relationship was found between disease duration or disability index and any neurocognitive test. Conclusions: Our study showed that the neurocognitive functions of dystonia patients were worse than those of a control group of similar age, sex, and education, independently of clinical expression such as disease duration and disability index. This may be the result of possible cortical and subcortical changes in dystonia patients; advanced neuroimaging techniques might help explain these changes in cervical dystonia patients.
Keywords: cervical dystonia, neurocognitive impairment, neuropsychological test, dystonia disability index
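As a hedged aside on the patient-versus-control comparisons reported above: group differences of the kind listed (RAVLT, ACTT, etc.) can be tested without distributional assumptions using a permutation test on the difference of group means. The sketch below is illustrative only; the scores are invented, and the abstract does not state which statistical test the authors used.

```python
import random
from statistics import mean

def permutation_p(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sided permutation p-value for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n = len(group_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabelling of subjects into two groups
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            hits += 1
    return hits / n_iter

# Invented example scores: clearly separated groups yield a small p-value
patients = [12, 14, 11, 13, 10]
controls = [21, 23, 20, 22, 24]
p = permutation_p(patients, controls)
```

In a real analysis, each neuropsychological score would be tested (and corrected for multiple comparisons) in the same way.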
Procedia PDF Downloads 420
580 An Australian Tertiary Centre Experience of Complex Endovascular Aortic Repairs
Authors: Hansraj Bookun, Rachel Xuan, Angela Tan, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos, Jim Iliopoulos
Abstract:
Introduction: Complex endovascular aortic aneurysmal repairs with fenestrated and branched endografts require customised devices to exclude the pathology while reducing the morbidity and mortality historically associated with open repair of complex aneurysms. Such endovascular procedures have predominantly been performed in large-volume dedicated tertiary centres. We present here our nine-year multidisciplinary experience with this technology in an Australian tertiary centre. Method: This was a cross-sectional, single-centre observational study of 670 patients who had undergone complex endovascular aortic aneurysmal repairs with conventional endografts, fenestrated endografts, and iliac-branched devices from January 2010 to July 2019. Descriptive statistics were used to characterise the sample with regard to demographic and perioperative variables. Homogeneity of the sample was tested using multivariate regression, which did not identify any statistically significant confounding variables. Results: 670 patients (592 males) with a mean age of 74 were included, and the comorbid burden was as follows: ischemic heart disease (55%), diabetes (18%), hypertension (90%), stage four or greater kidney impairment (8%) and current or ex-smoking (78%). The main indications for surgery were elective aneurysms (86%), symptomatic aneurysms (5%), and ruptured aneurysms (5%). 106 patients (16%) underwent fenestrated or branched endograft repairs. The mean length of stay was 7.6 days. 2 patients experienced reactionary bleeds; 11 patients had access wound complications (6 lymph fistulae, 5 haematomas); 11 patients had cardiac complications (5 arrhythmias, 3 acute myocardial infarctions, 3 exacerbations of congestive cardiac failure); 10 patients had respiratory complications; 8 patients had renal impairment; 4 patients had gastrointestinal complications; 2 patients suffered paraplegia; and there was 1 major stroke, 1 minor stroke, and 1 acute brain syndrome.
There were 4 vascular occlusions requiring further arterial surgery, 4 type I endoleaks, 4 type II endoleaks, 3 episodes of thromboembolism, and 2 patients who required further arterial operations in the setting of patent vessels. There were 9 unplanned returns to theatre. Discussion: Our numbers over this period suggest that we are not a dedicated high-volume centre focusing on aortic repairs; however, we have achieved significantly low complication rates. This can be attributed to our multidisciplinary approach, with the intraoperative involvement of skilled interventional radiologists and vascular surgeons, as well as postoperative protocols with particular attention to spinal cord protection. Additionally, we have a ratified perioperative pathway that involves multidisciplinary team discussions of patient-related factors and lesion-centred characteristics, which allows for holistic, patient-centred care.
Keywords: aneurysm, aortic, endovascular, fenestrated
Procedia PDF Downloads 122
579 Effect of Silica Nanoparticles on Three-Point Flexural Properties of Isogrid E-Glass Fiber/Epoxy Composite Structures
Authors: Hamed Khosravi, Reza Eslami-Farsani
Abstract:
Increased interest in lightweight and efficient structural components has created the need to select materials with improved mechanical properties. Composite materials are therefore widely used in many applications due to their durability, high strength and modulus, and low weight. Among the various composite structures, grid-stiffened structures are extensively considered in aerospace and aircraft applications because of their higher specific strength and stiffness, higher impact resistance, superior load-bearing capacity, ease of repair, and excellent energy absorption capability. Although there are a good number of publications on the design aspects and fabrication of grid structures, to our knowledge little systematic work has been reported on modifying their materials to improve their properties. Therefore, the aim of this research is to study the reinforcing effect of silica nanoparticles on the flexural properties of epoxy/E-glass isogrid panels under a three-point bending test. Samples containing 0, 1, 3, and 5 wt.% of silica nanoparticles, with 44 and 48 vol.% of glass fibers in the rib and skin components respectively, were fabricated using a manual filament winding method. Ultrasonic and mechanical routes were employed to disperse the nanoparticles within the epoxy resin. To fabricate the ribs, the unidirectional fiber rovings were impregnated with the matrix mixture (epoxy + nanoparticles) and then laid up into the grooves of a silicone mold layer by layer. Then, four plies of woven fabric, after impregnation in the same matrix mixture, were layered on top of the ribs to produce the skin part. In order to complete curing and achieve maximum strength, the samples were tested after 7 days of holding at room temperature.
According to the load-displacement graphs, the following trend was observed for all samples when loaded from the skin side: after an initial linear region and a load peak, the curve dropped abruptly and then showed a typical energy-absorption region. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure at the load peak. The results showed that the flexural properties of the nanocomposite samples were always higher than those of the nanoparticle-free sample. The maximum enhancement in maximum flexural load and energy absorption was found for the incorporation of 3 wt.% of nanoparticles. Furthermore, the flexural stiffness increased continually with increasing silica loading. In conclusion, this study suggests that the addition of nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: grid-stiffened composite structures, nanocomposite, three-point flexural test, energy absorption
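For context, the outer-fibre stress in a three-point bending test of a rectangular beam follows the standard relation σ = 3FL/(2bd²), with F the applied load, L the support span, b the specimen width and d its thickness. The abstract does not give specimen dimensions, so the values in the sketch below are purely illustrative placeholders.

```python
def flexural_stress(load_n, span_mm, width_mm, thickness_mm):
    """Outer-fibre stress in three-point bending: sigma = 3*F*L / (2*b*d^2).
    With inputs in N and mm, the result is in MPa."""
    return 3 * load_n * span_mm / (2 * width_mm * thickness_mm ** 2)

# Illustrative specimen: 1200 N peak load, 100 mm span, 25 mm x 4 mm section
sigma = flexural_stress(1200, 100, 25, 4)  # MPa
```

Flexural stiffness would likewise be read from the slope of the initial linear region of the load-displacement curve.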
Procedia PDF Downloads 341
578 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals lack the power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous positioning, navigation and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and significantly reduces radio map construction costs compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
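Accuracy figures of the kind quoted above (a mean positioning error, and the fraction of errors under a threshold, i.e., one point on the error CDF) are computed from predicted and ground-truth coordinates. The minimal sketch below illustrates that arithmetic; the sample positions are invented for illustration and are not the paper's data.

```python
from math import hypot

def error_stats(predicted, ground_truth, threshold_m):
    """Mean Euclidean positioning error and fraction of errors <= threshold."""
    errors = [hypot(px - tx, py - ty)
              for (px, py), (tx, ty) in zip(predicted, ground_truth)]
    mean_error = sum(errors) / len(errors)
    frac_below = sum(e <= threshold_m for e in errors) / len(errors)
    return mean_error, frac_below

# Invented 2-D positions (metres): predictions vs. ground truth
predicted = [(0.0, 0.0), (1.0, 1.0), (3.0, 4.0)]
ground_truth = [(0.0, 0.3), (1.0, 1.0), (0.0, 0.0)]
mean_err, frac = error_stats(predicted, ground_truth, 0.82)
```

Evaluating `frac` at several thresholds traces the full error CDF used to report statements such as "more than 90% of errors are below 0.82 m".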
Procedia PDF Downloads 48
577 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States
Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss
Abstract:
Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, the formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and it can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories:
• Diagnosis: costs of diagnosis using DISCERN™ and the CDP.
• False negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication.
• True positive (TP) diagnosis: AD medication costs; cost from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis, and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis.
• False positive (FP) diagnosis: cost of AD medication for patients who do not have AD.
A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters by ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs.
In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces patients' utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
Keywords: Alzheimer’s disease, budget, dementia, diagnosis
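The category structure described in this abstract reduces to expected-cost arithmetic per diagnostic strategy: each strategy's total cost is the per-patient diagnosis cost plus probability-weighted FN, FP and TP costs, scaled by the tested population. The skeleton below is illustrative only; every dollar figure and probability is an invented placeholder, not an input of the published model.

```python
# Hypothetical budget-impact skeleton; all inputs are invented placeholders.

def strategy_cost(n_tested, test_cost, p_fn, fn_extra_cost,
                  p_fp, fp_med_cost, p_tp, tp_med_cost):
    """Expected total cost of one diagnostic strategy over the model horizon."""
    per_patient = (test_cost              # diagnosis cost
                   + p_fn * fn_extra_cost  # avoidable cost of a missed diagnosis
                   + p_fp * fp_med_cost    # AD medication given without AD
                   + p_tp * tp_med_cost)   # AD medication for true cases
    return n_tested * per_patient

# Invented comparison: current pathway vs. the new test
cdp = strategy_cost(10_000, 1_000, 0.20, 5_000, 0.10, 2_000, 0.60, 3_000)
discern = strategy_cost(10_000, 1_500, 0.05, 5_000, 0.03, 2_000, 0.70, 3_000)
net_savings = cdp - discern  # positive when the new test saves money overall
```

A one-way sensitivity analysis then simply re-evaluates `net_savings` with each input varied ±10% in turn.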
Procedia PDF Downloads 138
576 Production, Characterization and In vitro Evaluation of [223Ra]RaCl2 Nanomicelles for Targeted Alpha Therapy of Osteosarcoma
Authors: Yang Yang, Luciana Magalhães Rebelo Alencar, Martha Sahylí Ortega Pijeira, Beatriz da Silva Batista, Alefe Roger Silva França, Erick Rafael Dias Rates, Ruana Cardoso Lima, Sara Gemini-Piperni, Ralph Santos-Oliveira
Abstract:
Radium-223 dichloride ([²²³Ra]RaCl₂) is an alpha-particle-emitting radiopharmaceutical currently approved for the treatment of patients with castration-resistant prostate cancer, symptomatic bone metastases, and no known visceral metastatic disease. [²²³Ra]RaCl₂ is a bone-seeking calcium mimetic that binds into the newly formed bone stroma, especially osteoblastic or sclerotic metastases, killing tumor cells by inducing DNA breaks in a potent and localized manner. Nonetheless, successful therapy of osteosarcoma as a primary bone tumor is still a challenge. Nanomicelles are colloidal nanosystems widely used in drug development to improve blood circulation time, bioavailability, and specificity of therapeutic agents, among other applications. In addition, the enhanced permeability and retention effect of nanosystems, and the renal excretion of nanomicelles reported in most cases so far, are very attractive for achieving selective and increased accumulation at the tumor site as well as for increasing the safety of [²²³Ra]RaCl₂ in the clinical routine. In the present work, [²²³Ra]RaCl₂ nanomicelles were produced, characterized, evaluated in vitro, and compared with pure [²²³Ra]RaCl₂ solution using SAOS2 osteosarcoma cells. The [²²³Ra]RaCl₂ nanomicelles were prepared using the amphiphilic copolymer Pluronic F127. Dynamic light scattering analysis of freshly produced [²²³Ra]RaCl₂ nanomicelles demonstrated a mean size of 129.4 nm with a polydispersity index (PDI) of 0.303. After one week stored in the refrigerator, the mean size of the [²²³Ra]RaCl₂ nanomicelles increased to 169.4 nm with a PDI of 0.381. Atomic force microscopy analysis of [²²³Ra]RaCl₂ nanomicelles exhibited spherical structures whose heights reach 1 µm, suggesting the filling of Pluronic F127 nanomicelles with [²²³Ra]RaCl₂. The viability assay with [²²³Ra]RaCl₂ nanomicelles displayed a dose-dependent response, as was observed with pure [²²³Ra]RaCl₂.
However, at the same dose, [²²³Ra]RaCl₂ nanomicelles were 20% more efficient in killing SAOS2 cells than pure [²²³Ra]RaCl₂. These findings demonstrate the effectiveness of the nanosystem, validating the application of nanotechnology in targeted alpha therapy with [²²³Ra]RaCl₂. In addition, the [²²³Ra]RaCl₂ nanomicelles may be decorated with and incorporate a great variety of agents and compounds (e.g., monoclonal antibodies, aptamers, peptides) to overcome the limited use of [²²³Ra]RaCl₂.
Keywords: nanomicelles, osteosarcoma, radium dichloride, targeted alpha therapy
Procedia PDF Downloads 117