Search results for: development life cycle
854 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting
Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva
Abstract:
The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, which is defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. Frequencies in the 1.8 GHz band are part of the GSM/LTE band. GSM (Global System for Mobile Communication) is a frequency band of mobile telephony, also called second-generation mobile networks (2G); it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP, and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) frequency band. This band is internationally reserved for industrial, scientific, and medical development with no need for licensing; its only restrictions relate to maximum power transfer and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to achieve efficiency above 50% for an input power of -15 dBm. It is known that for wireless energy capture systems the signal power is very low and varies greatly; for this reason, this ultra-low input power was chosen. The rectenna was built using the low-cost FR-4 (flame retardant) substrate; the antenna selected is a microstrip antenna consisting of a meandered dipole, which was optimized using the software CST Studio.
This antenna has high efficiency, high gain, and high directivity. Gain is the measure of how efficiently an antenna captures the signals transmitted by another antenna and/or station. Directivity is the ability of an antenna to capture energy preferentially in a certain direction. The rectifier circuit used has a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, which is a non-linear component. The chosen diode is the Schottky diode SMS 7630; it presents a low barrier voltage (between 135-240 mV) and a wider band compared to other types of diodes, attributes that make it well suited for this type of application. The rectifier circuit also uses an inductor and a capacitor, which form part of its input and output filters. The inductor has the function of decreasing the dispersion effect on the efficiency of the rectifier circuit. The capacitor has the function of eliminating the AC component of the rectified signal, smoothing it rather than leaving it undulating.
Keywords: dipole antenna, double-band, high efficiency, rectenna
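The efficiency figure quoted in the abstract (above 50% at -15 dBm) is simply the ratio of rectified DC output power to incident RF power. A minimal sketch of that calculation follows; the load resistance and measured DC voltage are hypothetical values chosen only for illustration, not figures from the paper:

```python
def dbm_to_watts(p_dbm: float) -> float:
    """Convert RF power from dBm to watts."""
    return 10 ** (p_dbm / 10) / 1000.0

def rectenna_efficiency(p_in_dbm: float, v_dc: float, r_load: float) -> float:
    """RF-to-DC conversion efficiency: P_DC / P_RF."""
    p_in = dbm_to_watts(p_in_dbm)        # incident RF power in watts
    p_dc = v_dc ** 2 / r_load            # DC power delivered to the load
    return p_dc / p_in

# Hypothetical measurement: -15 dBm input (about 31.6 uW),
# 40 mV DC across a 100-ohm load.
eff = rectenna_efficiency(-15.0, 0.040, 100.0)
print(f"{eff:.1%}")  # → 50.6%
```

At these ultra-low power levels even a few millivolts of diode drop changes the ratio substantially, which is why the low-barrier SMS 7630 Schottky diode matters for the design.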
Procedia PDF Downloads 124
853 Recent Advances in Research on Carotenoids: From Agrofood Production to Health Outcomes
Authors: Antonio J. Melendez-Martinez
Abstract:
Beyond their role as natural colorants, some carotenoids are provitamins A and may be involved in health-promoting biological actions, contributing to a reduced risk of developing non-communicable diseases, including several types of cancer, cardiovascular disease, eye conditions, skin disorders, and metabolic disorders. Given the versatility of carotenoids, the COST-funded European network to advance carotenoid research and applications in agro-food and health (EUROCAROTEN) aims at promoting health through the diet and increasing well-being. Stakeholders from 38 countries participate in this network, and one of its main objectives is to promote research on little-studied carotenoids. In this contribution, recent advances of our research group and collaborators in the study of two such understudied carotenoids, namely phytoene and phytofluene, the colorless carotenoids, are outlined. The study of these carotenoids is important because they have been largely neglected despite being present in our diets, fluids, and tissues, and evidence is accumulating that they may be involved in health-promoting actions. More specifically, studies on their levels in diverse tomato and orange varieties were carried out, as well as on their potential bioavailability from different dietary sources. Furthermore, the potential effect of these carotenoids on an animal model subjected to oxidative stress was evaluated. The tomatoes were grown in research greenhouses, and some of them were subjected to regulated deficit irrigation, a sustainable agronomic practice. The citrus samples were obtained from an experimental field. The levels of carotenoids were assessed by HPLC according to routine methodologies followed in our lab. Regarding the potential bioavailability (bioaccessibility) studies, different products containing colorless carotenoids, such as fruits and juices, were subjected to simulated in vitro digestions, and their incorporation into mixed micelles was assessed.
The effect of the carotenoids on oxidative stress was evaluated in the Caenorhabditis elegans model. For that purpose, the worms were subjected to oxidative stress by means of a hydrogen peroxide challenge. Concerning the presence of colorless carotenoids in tomato and orange varieties, it was observed that they are widespread in such products and that there are mutants with very high quantities of them, for instance, the Cara Cara or Pinalate mutant oranges. The studies on their bioaccessibility revealed that, in general, phytoene and phytofluene are more bioaccessible than other common dietary carotenoids, probably due to their distinctive chemical structure. Regarding the in vivo antioxidant capacity of phytoene and phytofluene, it was observed that they both exerted antioxidant effects at certain doses. In conclusion, recent studies from our group have obtained evidence on the importance of phytoene and phytofluene as easily bioavailable, antioxidant dietary carotenoids, which can soon be important for innovation in health promotion through the development of functional foods and related products.
Keywords: carotenoids, health, functional foods, nutrition, phytoene, phytofluene
Procedia PDF Downloads 103
852 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks
Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos
Abstract:
Optical communications technology has made enormous and steady progress for several decades, providing the key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article, we explore basic issues of security and reliability for secure and reliable information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today's commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optics has held the advantage over copper-based and wireless telecommunications solutions for most of these requirements. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically. With so many advantages, including cost efficiency, fiber optic systems will continue to replace copper-based communications. This will also lead to an increase in the expertise and the technology needed by intruders to tap into fiber optic networks. As with every technology before it, fiber optics has been subject to hacking and criminal manipulation; it is no exception.
Researching fiber optic security vulnerabilities suggests that not everyone who is responsible for network security is aware of the different methods intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cable stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information, such as medical records, banking information, driving records, and credit card information, being aware of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that fiber optics is expensive, impractical, and hard to tap. Others argue that tapping is not only easily done but also inexpensive. This paper will briefly discuss the history of fiber optics, explain the basics of fiber optic technologies, and then discuss the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a lot of embarrassment, time, and, most importantly, money.
Keywords: in-house networks, fiber optics, security risk, money
Procedia PDF Downloads 420
851 Let’s Work It Out: Effects of a Cooperative Learning Approach on EFL Students’ Motivation and Reading Comprehension
Authors: Shiao-Wei Chu
Abstract:
To enhance the ability of their graduates to compete in an increasingly globalized economy, the majority of universities in Taiwan require students to pass Freshman English in order to earn a bachelor's degree. However, many college students show low motivation in English class for several important reasons, including exam-oriented lessons, unengaging classroom activities, a lack of opportunities to use English in authentic contexts, and low confidence in using English. Students' lack of motivation in English classes is evidenced when they doze off, work on assignments from other classes, or use their phones to chat, play video games, or watch online shows. Cooperative learning aims to address these problems by encouraging language learners to use the target language to share individual experiences, cooperatively complete tasks, and build a supportive classroom learning community in which students take responsibility for one another's learning. This study includes approximately 50 student participants in a low-proficiency Freshman English class. Each week, participants will work together in groups of 3 to 4 students to complete various in-class interactive tasks. The instructor will employ a reward system that incentivizes students to be responsible for their own as well as their group mates' learning. The rewards will be based on points that team members earn through formal assessment scores as well as assessed participation in weekly in-class discussions. The instructor will record each team's week-by-week improvement; once a team meets or exceeds its own earlier performance, each of its members will receive a reward from the instructor. This cooperative learning approach aims to stimulate EFL freshmen's learning motivation by creating a supportive, low-pressure learning environment meant to build learners' self-confidence.
Students will practice all four language skills; however, the present study focuses primarily on the learners' reading comprehension. Data sources include in-class discussion notes, instructor field notes, one-on-one interviews, students' midterm and final written reflections, and reading scores. Triangulation is used to determine themes and concerns, and an instructor-colleague analyzes the qualitative data to build interrater reliability. Findings are presented through the researcher's detailed description. The instructor-researcher has developed this approach in the classroom over several terms, and its apparent success at motivating students inspires this research. The aims of this study are twofold: first, to examine the possible benefits of this cooperative approach in terms of students' learning outcomes; and second, to help other educators adopt a more cooperative approach in their classrooms.
Keywords: freshman English, cooperative language learning, EFL learners, learning motivation, zone of proximal development
Procedia PDF Downloads 145
850 Mindfulness and the Purpose of Being in the Present
Authors: Indujeeva Keerthila Peiris
Abstract:
The secular view of mindfulness bears some relation to the original meaning of mindfulness in the Theravada Buddhist texts (Pāli Canon), but there is a substantial difference between the two. Secular Mindfulness-Based Interventions (MBIs) focus on stilling the mind, which may provide short-term benefits and help individuals deal with physical pain, grief, and distress. However, as with many popular educational innovations, the foundational values of mindfulness strategies have been distorted and subverted in a number of instances, in which 'McMindfulness' programmes have reduced mindfulness meditation to a self-help technique that is easily misappropriated for the exclusive pursuit of corporate objectives, employee pacification, and commercial profit. The intention of this paper is not to critique the misappropriations of mindfulness but to go back to the root source and bring insights from the Buddhist Pāli Canon and its associated teachings on mindfulness in its own terms. In the Buddha's discourses, as preserved in the Pāli Canon, there is nothing more significant than the understanding and practice of 'Satipatthāna'. The Satipatthāna Sutta, the 'Discourse on the Establishment of Mindfulness', opens with a proclamation highlighting both the purpose of this training and its methodology. The right practice of mindfulness is the gateway to understanding the Buddha's teaching. However, although this concept is widely discussed among Dhamma practitioners, it is the least understood of them all. The purpose of this paper is to understand the deeper meaning of mindfulness as it was originally intended by the Teacher. The natural state of the mind is that it wanders: into the past, the present, and the future. One's ability to hold attention on a mind object (emotion, thought, feeling, sensation, sense impression) is called 'concentration'. The intentional concentration process does not by itself lead to wisdom.
However, the development of wisdom starts when the mind is calm, concentrated, and unified. The practice of insight contemplation aims at gaining a direct understanding of the real nature of phenomena. According to the Buddha's teaching, there are three basic facts of all existence: 1) impermanence (anicca in Pāli); 2) suffering or unsatisfactoriness (fabrication; sankhara or dukkha in Pāli); 3) not-self (insubstantiality or impersonality; anatta in Pāli). The entire Buddhist doctrine is based on these three facts. The problem is that our ignorance covers reality. It is not that a person sees the emptiness of phenomena, or that we try to see the emptiness of our experience by conceptually thinking that it is empty. It is an experiential outcome that happens when cause-and-effect overrides the self-view (sakkaya ditthi), and ignorance is known as ignorance and eradicated once and for all. Therefore, right view (samma ditthi) is the starting point of the path, not ethical conduct (sila) or samadhi (jhana). In order to develop right view, we need first to listen to the correct Dhamma and possess yoniso manasikara (right comprehension) to know the five aggregates as five aggregates.
Keywords: mindfulness, spirituality, Buddhism, Pali Canon
Procedia PDF Downloads 76
849 Examining the Relationship Between Green Procurement Practices and Firm’s Performance in Ghana
Authors: Alexander Otchere Fianko, Clement Yeboah, Evans Oteng
Abstract:
Prior research concludes that Green Procurement Practices positively drive Organisational Performance. Nonetheless, the nexus and the conditions under which Green Procurement Practices contribute to a Firm’s Performance are less understood. The purpose of this quantitative relational study was to examine the relationship between Green Procurement Practices and the Performance of 500 Firms in Ghana. The researchers further drew insights from the resource-based view to conceptualize Green Procurement Practices and Environmental Commitment as resource capabilities that enhance Firm Performance. The researchers used insights from the contingent resource-based view to examine the Green Leadership Orientation conditions under which Green Procurement Practices contribute to Firm Performance through Environmental Commitment Capabilities. The study’s conceptual framework was tested on primary data from firms in the Ghanaian market, and the PROCESS macro was used to test the study’s hypotheses. Environmental Commitment Capabilities mediated the association between Green Procurement Practices and Firm Performance. The study further examined whether Green Leadership Orientation positively moderates the indirect relationship between Green Procurement Practices and Firm Performance through Environmental Commitment Capabilities. While conventional wisdom suggests that improved Green Procurement Practices help improve a Firm’s Performance, this study tested this presumed relationship and provides theoretical arguments and empirical evidence to justify how Environmental Commitment Capabilities, uniquely and in synergy with Green Leadership Orientation, transform this relationship. The study results indicated a positive correlation between Green Procurement Practices and Firm Performance.
This result suggests that firms that prioritize environmental sustainability and demonstrate a strong commitment to environmentally responsible practices tend to experience better overall performance, including financial gains, operational efficiency, enhanced reputation, and improved relationships with stakeholders. The study’s findings inform policy formulation in Ghana related to environmental regulations, incentives, and support mechanisms. Policymakers can use the insights to design policies that encourage and reward firms for their Green Procurement Practices, thereby fostering a more sustainable and environmentally responsible business environment. The findings can also influence the design and development of educational programs in Ghana, specifically in fields related to sustainability, environmental management, and corporate social responsibility (CSR); institutions may consider integrating environmental and sustainability topics into their business and management courses to create awareness and promote responsible practices among future business professionals. Finally, the results can promote the adoption of environmental accounting practices in Ghana: by recognizing and measuring the environmental impacts and costs associated with business activities, firms can better understand the financial implications of their Green Procurement Practices and develop strategies for improved performance.
Keywords: environmental commitment, firm’s performance, green procurement practice, green leadership orientation
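The mediation structure described above (an indirect effect of practices on performance through commitment capabilities) is, at its core, estimated with two ordinary least-squares steps: path a (X to mediator M) and path b (M to Y, controlling for X), with the indirect effect being a*b. A minimal sketch of that logic follows; the data are synthetic and purely illustrative, not the study's, and this omits the bootstrap confidence intervals that tools like the PROCESS macro add:

```python
from typing import List

def _center(v: List[float]) -> List[float]:
    m = sum(v) / len(v)
    return [x - m for x in v]

def _sxy(a: List[float], b: List[float]) -> float:
    """Sum of centered cross-products (covariance up to a constant)."""
    ca, cb = _center(a), _center(b)
    return sum(x * y for x, y in zip(ca, cb))

def simple_mediation(x, m, y):
    """Paths of a simple mediation model: a (X->M), b (M->Y | X), c' (X->Y | M)."""
    a = _sxy(x, m) / _sxy(x, x)
    denom = _sxy(m, m) * _sxy(x, x) - _sxy(x, m) ** 2
    b = (_sxy(m, y) * _sxy(x, x) - _sxy(x, y) * _sxy(x, m)) / denom
    c_prime = (_sxy(x, y) * _sxy(m, m) - _sxy(m, y) * _sxy(x, m)) / denom
    return {"a": a, "b": b, "direct": c_prime, "indirect": a * b}

# Hypothetical data: practices x, commitment m ≈ 0.8*x, performance
# y = 0.5*m + 0.2*x, so the indirect effect should come out near 0.4.
x = [1.0, 2.0, 3.0, 4.0]
m = [0.9, 1.5, 2.3, 3.3]
y = [0.5 * mi + 0.2 * xi for mi, xi in zip(m, x)]
print(simple_mediation(x, m, y))
```

Moderated mediation (the Green Leadership Orientation condition) would add an X-by-moderator interaction term to the mediator equation, which this sketch leaves out for brevity.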
Procedia PDF Downloads 80
848 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction
Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman
Abstract:
Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, one of the expected competencies for individuals of the 21st century, is associated with the effective use of technology in education. The most important factor in the effective use of technology in educational institutions is ICT teachers. The concept of computer-assisted instruction (CAI) refers to the utilization of information and communication technology as a tool to aid teachers in making education more efficient and improving its quality. In CAI, teachers can use computers in different places and at different times, according to the available hardware and software facilities and the characteristics of the subject and students. Analyzing teachers' use of computers in education is significant because teachers are the ones who manage the course, and they are the most important element in students' comprehension of a topic. Accomplishing computer-assisted instruction efficiently is possible when teachers hold a positive attitude toward it. Determining the levels of knowledge, attitude, and behavior of teachers who receive their professional knowledge from faculties of education, and eliminating any deficiencies, are crucial while teachers are still at the faculty. Therefore, the aim of this paper is to identify ICT teachers' attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying at Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education, CEIT department. The data collection tools are a "personal information form" developed by the researchers and used to collect demographic data, and "the attitude scale related to computer-assisted instruction". The scale consists of 20 items: 10 of these items are positively worded, while 10 are negatively worded.
The Kaiser-Meyer-Olkin (KMO) coefficient of the scale was found to be 0.88, and the Bartlett test significance value was 0.000. The Cronbach's alpha reliability coefficient of the scale was 0.93. A computer-based statistical software package was used to analyze the data collected; statistical techniques such as descriptive statistics, t-tests, and analysis of variance were utilized. It was determined that the attitudes of prospective teachers towards computers do not differ according to their educational branches. On the other hand, the attitudes towards computer-supported education of prospective teachers who own computers were found to be higher than those of prospective teachers who do not. It was established that whether students had previously received computer lessons in their departments did not affect this situation much. The result is that computer experience positively affects attitude scores regarding computer-supported education.
Keywords: computer-based instruction, teacher candidate, attitude, technology-based instruction, information and communication technologies
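The Cronbach's alpha reported above (0.93) is computed directly from the item scores: it compares the sum of the individual item variances with the variance of the total scale score. A minimal sketch with toy data (the respondent scores below are hypothetical, not the study's):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per scale item, same respondents in each.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 2-item scale answered by 3 respondents.
print(cronbach_alpha([[2, 4, 6], [1, 2, 3]]))  # → 0.888...
```

Perfectly consistent items (every item identical across respondents) give alpha = 1.0, which is a handy sanity check on any implementation.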
Procedia PDF Downloads 295
847 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data to help organizations and companies make intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep and wake up periodically or aperiodically, depending on traffic loads, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost. After such a disconnection, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Due to this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before bringing it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how they impact data quality.
This research presents a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and proposes a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality; DBSCAN clustering and weather sensors are used to build it. An extensive study was carried out on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model was prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of each cluster's position. Finally, statistical analysis was performed on the clusters formed by DBSCAN, and consistency was evaluated through the Coefficient of Variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
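The outlier-detection and consistency steps described above can be sketched compactly: a hand-rolled one-dimensional DBSCAN that labels isolated readings as noise (-1), followed by the Coefficient of Variation over a cluster. This is a minimal illustration of the technique, not the paper's model; the turbidity readings and the eps/min_pts parameters are hypothetical:

```python
from statistics import mean, pstdev

def dbscan_1d(readings, eps, min_pts):
    """Minimal DBSCAN over 1-D sensor readings; label -1 marks outliers/noise."""
    def neighbors(i):
        return [j for j, r in enumerate(readings) if abs(r - readings[i]) <= eps]

    labels = [None] * len(readings)
    cid = 0
    for i in range(len(readings)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # too isolated: provisional noise
            continue
        labels[i] = cid                 # i is a core point: start a cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid         # noise reached by a cluster becomes border
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:      # j is also core: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels

def coefficient_of_variation(values):
    """CoV = standard deviation / mean, the consistency measure used above."""
    return pstdev(values) / mean(values)

# Hypothetical turbidity stream: two stable regimes plus one faulty spike.
readings = [1.0, 1.1, 1.2, 10.0, 10.1, 50.0]
labels = dbscan_1d(readings, eps=0.5, min_pts=2)
print(labels)  # → [0, 0, 0, 1, 1, -1]; the 50.0 spike is flagged as an outlier
cluster0 = [r for r, l in zip(readings, labels) if l == 0]
print(round(coefficient_of_variation(cluster0), 3))  # → 0.074
```

A low CoV within a cluster indicates a consistent sensor regime, while the -1 labels mark readings to drop or impute before further analysis.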
Procedia PDF Downloads 139
846 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period
Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer
Abstract:
Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes in pregnancy place an increased load on a normal lymphatic system, which can result in transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy through to postnatal recovery. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and pregnancy and postnatal care received in Australia. Exclusion criteria were a diagnosis of lipoedema and being unable to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases.
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups to provide the transcript data for inductive thematic analysis, giving an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were found to be significantly impacted, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not happen, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including healthcare providers.
Keywords: lymphoedema, management strategies, pregnancy, qualitative
Procedia PDF Downloads 85
845 Identifying the Barriers to Institutionalizing a One Health Concept in Responding to Zoonotic Diseases in South Asia
Authors: Rojan Dahal
Abstract:
One Health refers to a collaborative effort between multiple disciplines - locally, nationally, and globally - to attain optimal health. Although there were unprecedented intersectoral alliances between the animal and human health sectors during the avian influenza outbreak, there are differing views and perceptions concerning the institutionalization of One Health in South Asia. There is likely a structural barrier between the relevant professionals working in different entities or ministries when it comes to collaborating on One Health actions regarding zoonotic diseases. Politicians and the public will likely need to invest large amounts of money, demonstrate political will, and understand how One Health works to overcome these barriers. Investing in One Health may be difficult in South Asian countries, where the benefits are based primarily on models and projections and where numerous issues related to development and health need urgent attention. Another potential barrier to enabling the One Health concept in responding to zoonotic diseases is the failure to represent One Health in zoonotic disease control and prevention measures in national health policy, which is a critical component of institutionalizing the One Health concept. One Health cannot be institutionalized without acknowledging the linkages between the animal, human, and environmental sectors in dealing with zoonotic diseases. Efforts have been made in the past to prepare a preparedness plan for One Health implementation, but little has been done to establish a policy environment to institutionalize One Health. It is often assumed that health policy refers specifically to medical care issues and health care services. When drafting, reviewing, and redrafting policy, it is important to engage a wide range of stakeholders.
One Health institutionalization may also be hindered by the interplay between One Health professionals and bureaucratic inertia in defining disease priorities under competing interests on limited budgets. There is a possibility that policymakers do not recognize the importance of veterinary professionals in preventing human diseases originating in animals. Compared to veterinary medicine, the human health sector has produced most of the investment and research outputs related to zoonotic diseases, and the public health profession may consider itself superior to the veterinary profession. Zoonotic diseases might not be recognized as threats to human health, impeding integrated policies. The effort toward One Health institutionalization has remained confined to donor agencies and multi-sectoral organizations. Strong political will and state capacity are needed to overcome the existing institutional, financial, and professional barriers to effective implementation, and the structural challenges, policy challenges, and attitudes of professionals working in the multiple disciplines related to One Health need to be assessed. Limited research has been conducted to identify the reasons behind these barriers to institutionalizing the One Health concept in South Asia. Institutionalizing One Health in responding to zoonotic diseases breaks down silos and integrates animals, humans, and the environment.
Keywords: one health, institutionalization, South Asia
Procedia PDF Downloads 98
844 Magnetic SF (Silk Fibroin) E-Gel Scaffolds Containing bFGF-Conjugated Fe3O4 Nanoparticles
Authors: Z. Karahaliloğlu, E. Yalçın, M. Demirbilek, E.B. Denkbaş
Abstract:
Critical-sized bone defects caused by trauma, bone diseases, prosthetic implant revision or tumor excision cannot be repaired by physiological regenerative processes. Current orthopedic treatments for critical-sized bone defects use autologous bone grafts, bone allografts, or synthetic graft materials. However, these strategies do not solve the problem completely, which motivates the development of novel, effective biological scaffolds for tissue engineering and regenerative medicine applications. In particular, scaffolds combined with a variety of bio-agents emerge as fundamental tools for the regeneration of damaged bone tissues due to their ability to promote cell growth and function. In this study, a magnetic silk fibroin (SF) hydrogel scaffold was prepared by the electrogelation of a concentrated Bombyx mori silk fibroin (8 wt%) aqueous solution. To enhance the growth and adhesion of osteoblast-like cells (SaOS-2), basic fibroblast growth factor (bFGF) was conjugated physically to HSA (human serum albumin)-coated magnetic nanoparticles (Fe3O4), and magnetic SF e-gel scaffolds were prepared by incorporating Fe3O4, HSA=Fe3O4, and HSA=Fe3O4-bFGF nanoparticles. HSA=Fe3O4-loaded, HSA=Fe3O4-bFGF-loaded, and bare SF e-gel scaffolds were characterized using scanning electron microscopy (SEM). For the cell studies, the human osteoblast-like cell line SaOS-2 was used, and an MTT assay was performed to assess the cytotoxicity of the magnetic silk fibroin e-gel scaffolds and the cell density on their surfaces. To evaluate osteogenic activation, ALP (alkaline phosphatase), the amount of mineralized calcium, total protein, and collagen were studied. Fe3O4 nanoparticles were successfully synthesized, and bFGF was conjugated to the HSA=Fe3O4 nanoparticles with a binding yield of 97.5% and a particle size of 71.52±2.3 nm. Electron microscopy images of the prepared HSA- and bFGF-incorporated SF e-gel scaffolds showed a 3D porous morphology.
In terms of water uptake, the bFGF-conjugated HSA=Fe3O4 nanoparticle scaffolds showed the best water absorbability among all groups. In the in-vitro cell culture studies with the SaOS-2 cell line, coating the Fe3O4 nanoparticle surface with a protein enhanced cell viability, and both the HSA coating and the bFGF conjugation had an inductive effect on cell proliferation. According to the ALP activity and total protein results (ALP being a marker of bone formation and osteoblast differentiation), the HSA=Fe3O4-bFGF-loaded SF e-gels had significantly enhanced ALP activity. Osteoblasts cultured on HSA=Fe3O4-bFGF-loaded SF e-gels also deposited more calcium than on plain SF e-gel. The proposed magnetic scaffolds seem promising for bone tissue regeneration and will be used in future work for various applications. Keywords: basic fibroblast growth factor (bFGF), e-gel, iron oxide nanoparticles, silk fibroin
Procedia PDF Downloads 288
843 Development of a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong
Abstract:
In recent years, climate change has been a major factor increasing rainfall intensity and the frequency of extreme rainfall. Increased rainfall intensity and extreme rainfall frequency raise the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments, hydraulic works, and cause other disasters. Therefore, the scouring of bridge pier, embankment, and spur dike foundations by floods has been a severe problem worldwide. This problem is acute in East Asian countries such as Taiwan and Japan because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the fluid flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor based on vibration-based Micro-Electro-Mechanical Systems (MEMS) was developed. This MEMS sensor was packaged inside a stainless sphere, properly protected by being fully filled with resin, and can measure free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a user-friendly operational system was developed in this research; it includes a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and an assessment of the applicability of sediment transport equations and local scour formulas for bridge piers. The system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around bridge piers.
The system is built with an easy-to-use, integrated interface through which users can calibrate and verify the numerical models and display simulation results against the measurements from the scour monitoring sensors. To forecast the erosion depth of the river bed and at the main bridge pier in the study area, the system also connects to rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results provide the management units for river and bridge engineering with useful information in advance. Keywords: flash flood, river bed degradation, bridge pier scouring, friendly operational system
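The abstract cites "local scour formulas of bridge pier" without naming one. As a hedged illustration (not necessarily the formula used in this system), a widely applied relation is the HEC-18 (CSU) pier scour equation; the parameter values below are hypothetical:

```python
import math

def hec18_scour_depth(y1, V1, a, K1=1.0, K2=1.0, K3=1.1):
    """Estimate local pier scour depth ys (m) with the HEC-18 (CSU) equation:
    ys = 2.0 * y1 * K1*K2*K3 * (a/y1)**0.65 * Fr1**0.43,
    where y1 is the approach flow depth (m), V1 the approach velocity (m/s),
    a the pier width (m), and K1-K3 are correction factors for pier nose
    shape, flow angle of attack, and bed condition."""
    Fr1 = V1 / math.sqrt(9.81 * y1)  # approach Froude number
    return 2.0 * y1 * K1 * K2 * K3 * (a / y1) ** 0.65 * Fr1 ** 0.43

# Hypothetical flood scenario: 3 m deep flow at 2 m/s around a 1.5 m wide pier
ys = hec18_scour_depth(y1=3.0, V1=2.0, a=1.5)
```

A forecasting system like the one described would evaluate such a formula with the flow depths and velocities supplied by the rainfall-runoff and hydraulic models at each time step.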
Procedia PDF Downloads 191
842 Valorization of Banana Peels for Mercury Removal in Environmental Realist Conditions
Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva
Abstract:
Introduction: Mercury is one of the most troublesome toxic metals responsible for contamination of aquatic systems, due to its accumulation and bioamplification along the food chain. The 2030 Agenda for Sustainable Development of the United Nations promotes improving water quality by reducing water pollution and calls for enhanced wastewater treatment, encouraging recycling and safe water reuse globally. Sorption processes are widely used in wastewater treatment owing to their many advantages, such as high efficiency and low operational costs. In these processes, the target contaminant is removed from solution by a solid sorbent; the more selective and lower-cost the biosorbent, the more attractive the process. Agricultural wastes are especially attractive for sorption: they are largely available, have no commercial value, and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. To investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Kinetic and equilibrium studies were also performed using the best-known models to evaluate the viability of the process. In line with the concept of circular economy, this study adds value to this by-product and contributes to liquid waste management. Experimental: The solutions were prepared with an initial Hg(II) concentration of 50 µg L-1 in natural waters, at 22 ± 1 ºC, pH 6, under magnetic stirring at 650 rpm and with a biosorbent mass of 0.5 g L-1. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L-1. Banana peels were previously freeze-dried, milled, and sieved, and the particles < 1 mm were used.
Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied; in these cases, increasing the complexity of the water matrix promoted higher mercury removal. In salt waters, the biosorbent achieved removals of 96%, 95%, and 98% for 3, 15, and 30 g L-1 of NaCl, respectively, and the residual Hg(II) concentration reached the drinking water regulation level (1 µg L-1). For the real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by competition between the Hg(II) ions and the other species present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are best described by the Freundlich isotherm (R² = 0.991), while the Elovich equation provided the best fit to the kinetic points. Conclusions: The results demonstrate the great ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes. Keywords: banana peels, mercury removal, sorption, water treatment
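The Freundlich fit reported above (R² = 0.991) follows the isotherm q_e = K_F * C_e^(1/n), which is commonly fitted by linear regression in log-log space. A minimal sketch with synthetic data (not the study's measurements):

```python
import math

def fit_freundlich(Ce, qe):
    """Fit the Freundlich isotherm qe = KF * Ce**(1/n) by least squares on
    the linearized form: log(qe) = log(KF) + (1/n) * log(Ce)."""
    xs = [math.log(c) for c in Ce]
    ys = [math.log(q) for q in qe]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))  # slope = 1/n
    KF = math.exp(my - slope * mx)
    return KF, 1.0 / slope  # (KF, Freundlich exponent n)

# Synthetic equilibrium data: Ce in ug/L, qe in ug/g (illustrative only)
Ce = [1.0, 2.0, 5.0, 10.0]
qe = [12.0, 17.0, 27.1, 38.4]
KF, n_f = fit_freundlich(Ce, qe)
```

With real data, the R² of the linearized regression is what a study like this would report for the goodness of fit.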
Procedia PDF Downloads 155
841 The Expansion of Buddhism from India to Nepal Himalaya and Beyond
Authors: Umesh Regmi
Abstract:
This paper explores the geographical expansion of Buddhism from India to the Himalayan region of Nepal, Tibet, India, and Bhutan in chronological historical sequence. The Buddhism practiced in Tibet is the spread of the Mahayana-Vajrayana form shaped by the Indian Mahasiddhas, practitioners of the highest forms of tantra and meditation. Vajrayana Buddhism is rooted in esoteric practices incorporating the teachings of the Buddha, mantras, dharanis, rituals, and sadhana for attaining enlightenment. This form of Buddhism spread from India to Nepal after the 5th century AD and to Tibet after the 7th century AD, and made a return journey to the Himalayan region of Nepal, India, and Bhutan after the 8th century. The first diffusion of this form of Buddhism from India to Nepal and Tibet is partially documented through Buddhist texts and the archaeological record of monasteries, and at times relies on mythological traditions. The second diffusion of Buddhism in Tibet was institutionalized through the textual translations and interpretations of Indian Buddhist masters and their Tibetan disciples and the establishment of monasteries in various parts of Tibet, later resulting in different schools and their traditions: Nyingma, Kagyu, Sakya, Gelug, and their sub-schools. The first return journey of Buddhism from Tibet to the Himalayan region of Nepal, India, and Bhutan in the 8th century is mythologically recorded in local legends of the arrival of Padmasambhava; the second journey, in the 11th century and afterward, was furthered by many Indian masters whose traditions have been practiced continuously to date. This return journey of Tibetan Buddhism intensified after 1959 with the Chinese occupation of Tibet, as Tibetan Buddhist masters came to live in exile in major locations like Kathmandu, Dharmasala, Dehradun, Sikkim, Kalimpong, and beyond.
Using a historic-cultural-critical methodology for recognizing the qualities of cultural expressions, the analysis presents the Buddhist practices of the Himalayan region, explaining concepts such as ri (mountains as spiritual symbols), yul-lha (village deities), dhar-lha (the spiritual concept of mountain passes), dharchhog-lungdhar (prayer flags), rig-sum gonpo (small stupas), Chenresig, asura (demi-gods), etc. Tibetan Buddhist history has preserved important textual and practical aspects of Vajrayana Buddhism in the form of arrival, advent, and development, including rise and fall. Currently, Tibetan Buddhism has greatly influenced contemporary Buddhist practice worldwide. The exploratory findings from seven years of field visits and research in the Himalayan regions of Nepal, India, and Bhutan demonstrate that Buddhism in the Himalayan region is a return journey from Tibet and has lately, after 1959, been popularized globally by major monasteries and their Buddhist masters, lamas, nuns and other professionals, who have contributed in different periods of time. Keywords: Buddhism, expansion, Himalayan region, India, Nepal, Bhutan, return, Tibet, Vajrayana Buddhism
Procedia PDF Downloads 108
840 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions
Authors: Pirta Palola, Richard Bailey, Lisa Wedding
Abstract:
Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy. Keywords: economics of biodiversity, environmental valuation, natural capital, value function
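The binary resilience value with a logistic discriminator, together with the Monte Carlo treatment of parameter uncertainty, can be sketched as follows; the ecosystem quantity, threshold, and steepness values are illustrative assumptions, not taken from the study:

```python
import math
import random

def logistic_value(q, q0, k):
    """Logistic value function: value ~0 below threshold q0, ~1 above it.
    Approximates a binary value such as ecological resilience (exists/lost)."""
    return 1.0 / (1.0 + math.exp(-k * (q - q0)))

def monte_carlo_value(q, q0_mean, q0_sd, k, n=10_000, seed=42):
    """Propagate uncertainty in the threshold q0 by Monte Carlo sampling."""
    rng = random.Random(seed)
    samples = [logistic_value(q, rng.gauss(q0_mean, q0_sd), k)
               for _ in range(n)]
    return sum(samples) / n  # expected value over the parameter distribution

# Hypothetical coral cover of 0.55 against an uncertain resilience
# threshold centered at 0.4 (standard deviation 0.05)
mean_value = monte_carlo_value(q=0.55, q0_mean=0.4, q0_sd=0.05, k=40)
```

Because values are expressed relative to the current level, such a function would be evaluated at each time step of a coupled ecosystem model to produce the value time series mentioned above.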
Procedia PDF Downloads 194
839 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State
Authors: Valentina Bonetti
Abstract:
Exergy analysis successfully helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field over the last decades, practical applications are hard to encounter, and few integrated exergy simulators are available for building designers. Undoubtedly, one obstacle to the diffusion of exergy methods is the strong dependency of results on the definition of the 'reference state', a highly controversial issue. Since exergy is the combination of energy and entropy by means of a reference state (also called 'reference environment' or 'dead state'), the reference choice is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near the reference state, which means that small variations have relevant impacts, and their behaviour is dynamical in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially in the case of dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state whose fluctuations follow the local climate. Even if the latter selection is prevalent in research and recommended by recent, widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice that defines exergy as a proper function of the state in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis.
Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared at each time step of a significant summer day, when the building behaviour is highly dynamical. Exergy efficiencies and fluxes are not familiar numbers, and thus the easier-to-imagine concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in light of the underpinning dynamical energy analysis. As a conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even if the fixed reference is generally only contemplated as a simpler selection, and the variable state is often claimed to be more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamical simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice. Keywords: exergy, reference state, dynamic, building
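How the two reference choices diverge can be sketched with the Carnot-factor exergy of heat, Ex = Q * (1 - T0/T), evaluated once against a fixed dead-state temperature and once against a fluctuating outdoor temperature; all numbers below are illustrative, not the case-study data:

```python
def heat_exergy(Q, T, T0):
    """Exergy content of heat Q (J) available at temperature T (K),
    relative to a reference (dead-state) temperature T0 (K)."""
    return Q * (1.0 - T0 / T)

# Heat stored in a wall held at 30 C over a summer day, with hourly
# outdoor temperatures rising from 20 C to 30 C (synthetic values)
T_wall = 303.15                                   # wall temperature, K
Q_step = 1.0e6                                    # J per time step
T_out = [293.15 + 10.0 * h / 23.0 for h in range(24)]

fixed_ref = [heat_exergy(Q_step, T_wall, 298.15) for _ in T_out]   # constant T0
variable_ref = [heat_exergy(Q_step, T_wall, T0) for T0 in T_out]   # fluctuating T0
```

With the fixed reference, the stored exergy per step is constant; with the variable reference, it shrinks to zero as the outdoor air approaches the wall temperature, which is exactly the kind of interpretation conflict the abstract discusses.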
Procedia PDF Downloads 226
838 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction
Authors: Sri Sai Ramya Bojedla, Falguni Pati
Abstract:
Nature provides the best solutions to humans. One such incredible gift to regenerative medicine is silk. The literature shows a long appreciation of silk owing to its remarkable physical and biological assets; its bioactive nature, unique mechanical strength, and processing flexibility invite further exploration toward clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in most parts of the body deals mainly with restoring function, but in facial reconstruction, answering both aesthetics and function is of utmost importance, as the face plays a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the noteworthy variation in each patient's bone anatomy; additionally, the anatomical shape and size vary with the type of defect. The advent of additive manufacturing (AM), or 3D printing, in bone tissue engineering has helped overcome many of the restraints of conventional fabrication techniques. The acquired patient CT data is converted into a stereolithography (STL) file, which is then used by the 3D printer to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function, and aesthetics of the facial anatomy.
These composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed a uniform dispersion of the silk fibroin microfibers in the PCL matrix. The addition of silk improves the compressive strength of the hybrid scaffolds, and the scaffolds with Antheraea mylitta silk revealed a higher compressive modulus than those with Bombyx mori silk. The above results strongly recommend the utilization of PCL-silk scaffolds in bone regenerative applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium. Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers
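The compressive modulus compared above is typically extracted as the slope of the initial linear region of the stress-strain curve from a compression test. A minimal sketch with synthetic data (the study's measured values are not reproduced here):

```python
def compressive_modulus(strain, stress, lo=0.0, hi=0.05):
    """Estimate compressive modulus as the least-squares slope of the
    stress-strain curve over the initial (assumed linear) strain region."""
    pts = [(e, s) for e, s in zip(strain, stress) if lo <= e <= hi]
    n = len(pts)
    me = sum(e for e, _ in pts) / n
    ms = sum(s for _, s in pts) / n
    return (sum((e - me) * (s - ms) for e, s in pts)
            / sum((e - me) ** 2 for e, _ in pts))

# Synthetic stress-strain data for a scaffold with ~10 MPa modulus
strain = [0.00, 0.01, 0.02, 0.03, 0.04]
stress = [0.00, 0.10, 0.20, 0.30, 0.40]   # MPa
E = compressive_modulus(strain, stress)    # MPa
```

Restricting the fit to a stated strain window (here 0-5%, an assumption) matters because scaffold stress-strain curves become nonlinear once pore walls start to buckle.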
Procedia PDF Downloads 197
837 Sustainability of the Built Environment of Ranchi District
Authors: Vaidehi Raipat
Abstract:
A city is an expression of coexistence between its users and its built environment; the way its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently to its people, culture, environment, visual image, history, visions, and identity. The quality of the built environment determines the quality of our lifestyles, but a poor ability of the built environment to adapt and sustain itself through change leads to the degradation of cities. Ranchi became the capital of the newly formed state of Jharkhand, located on the eastern side of India, in November 2000. Before this, Ranchi was known as the summer capital of Bihar and was little more than a town in terms of development. Since then, it has been vigorously expanding in size, infrastructure, and population, and this sudden expansion has stressed the existing built environment. The large forest cover, agricultural land, diverse culture, and pleasant climatic conditions have degraded and diminished to a large extent. Narrow roads and old buildings are unable to bear the load of changing requirements, fast-improving technology, and a growing population. The built environment has hence been rendered unsustainable and unadaptable by the rapid changes of the present era. Common hazards easily spotted in the built environment include half-finished built forms; pedestrians and vehicles moving on the same part of the road; unpaved street edges; oversized, bright, and randomly placed hoardings; and negligible trees or green spaces. The old buildings have been poorly maintained, and new ones are being constructed over them. Roads are too narrow to cater to the increasing traffic, both pedestrian and vehicular. The streets host a large variety of activities, but haphazardly. Trees are being cut down for road widening and new constructions.
There is no space for greenery in the commercial or old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations, while a superficial understanding of functionality and aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the existing built environment of the city and to create, or regenerate the existing built environment into, a more sustainable and adaptable one. For this purpose, the research titled "Sustainability of the Built Environment of Ranchi District" has been carried out. In this research, the condition of the built environment of Ranchi is explored so as to identify the problems and shortcomings existing in the city and to provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, which includes its outdoor spaces (streets, parks, and other open areas), its built forms, and its users, has been analyzed in terms of various urban design parameters, on the basis of which strategies have been suggested to make the city environmentally, socially, culturally, and economically sustainable. Keywords: adaptable, built-environment, sustainability, urban
Procedia PDF Downloads 237
836 High Strain Rate Behavior of Harmonic Structure Designed Pure Nickel: Mechanical Characterization Microstructure Analysis and 3D Modelisation
Authors: D. Varadaradjou, H. Kebir, J. Mespoulet, D. Tingaud, S. Bouvier, P. Deconick, K. Ameyama, G. Dirras
Abstract:
The development of new-architecture metallic alloys with controlled microstructures is one of the strategic routes for designing materials with high innovation potential and, particularly, with the improved mechanical properties required of structural materials. Indeed, unlike conventional counterparts, metallic materials with a so-called harmonic structure display a synergy of strength and ductility. The latter occurs due to a unique microstructure design: a coarse-grain structure surrounded by a 3D continuous network of ultra-fine grains, known as the "core" and "shell," respectively. In the present study, pure harmonic-structured (HS) nickel samples were processed via controlled mechanical milling followed by spark plasma sintering (SPS). The present work aims at characterizing the mechanical properties of HS pure nickel under room-temperature dynamic loading through Split Hopkinson Pressure Bar (SHPB) tests and the underlying microstructure evolution. A stopper ring was used to maintain the strain at a fixed value of about 20%. Five samples (named B1 to B5) were impacted using striker bar velocities from 14 m/s to 28 m/s, yielding strain rates in the range of 4000-7000 s-1. Results were considered up to a 10% deformation value, which is the deformation threshold for the constant strain rate assumption. The non-deformed (INIT, post-SPS) and post-SHPB microstructures (B1 to B5) were investigated by EBSD. It was observed that as the strain rate increases, the average grain size within the core decreases. An in-depth analysis of grains and grain boundaries was made to highlight the thermal (such as dynamic recrystallization) or mechanical (such as grain fragmentation by dislocations) contributions within the "core" and "shell." One of the most widely used methods for determining the dynamic behavior of materials is the SHPB technique developed by Kolsky. A 3D simulation of the SHPB test was created in ABAQUS using the dynamic explicit solver.
This 3D simulation makes it possible to take all modes of vibration into account. An inverse approach was used to identify the material parameters of the Johnson-Cook (JC) equation by minimizing the difference between the numerical and experimental data. The JC parameters were identified using the B1 and B5 sample configurations and, predictively, the identified parameters show good results for the other sample configurations. Furthermore, the mean temperature rise within the harmonic nickel sample can be obtained through ABAQUS and shows an elevation of about 35°C for all five samples. At this temperature, a thermal mechanism cannot be activated. Therefore, grain fragmentation within the core is mainly due to mechanical phenomena for a fixed final strain of 20%. Keywords: 3D simulation, fragmentation, harmonic structure, high strain rate, Johnson-Cook model, microstructure
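The Johnson-Cook flow stress model identified by the inverse approach has the standard form sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m). A sketch with illustrative parameters only, since the identified values are not given in the abstract:

```python
import math

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_room=293.0, T_melt=1728.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_room)/(T_melt - T_room).
    T_melt defaults to pure nickel (1728 K); A, B, n, C, m below are
    illustrative assumptions, not the study's identified parameters."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps ** n)
            * (1 + C * math.log(eps_rate / eps_rate0))
            * (1 - T_star ** m))

# Illustrative evaluation at 10% strain, a 5000 1/s strain rate, and a
# temperature about 35 C above room temperature (as computed in ABAQUS)
sigma = johnson_cook_stress(eps=0.10, eps_rate=5000.0, T=328.0,
                            A=160.0, B=900.0, n=0.6, C=0.01, m=1.0)
```

The small thermal-softening term at T_star of roughly 0.02 is consistent with the abstract's conclusion that a ~35°C rise cannot activate thermal mechanisms.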
Procedia PDF Downloads 231
835 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The Italian post-earthquake experiences, positive and negative, are conditioned by long delays and structural bureaucratic constraints, motivated in part by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays incompatible with the need for rapid recovery of the territories in crisis. In fact, intervening in areas affected by seismic events means associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of reconstructing activities, sociality, services, and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology is a synthetic comparison of the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results obtained can be summarized in technical comparison cards on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, one certainly not limited to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), and the strategic monitoring process become dynamic tools for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspects of an 'inversion' of priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and to rehabilitate and develop the most fragile places in Italy and abroad. Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy
Procedia PDF Downloads 127
834 Pueblos Mágicos in Mexico: The Loss of Intangible Cultural Heritage and Cultural Tourism
Authors: Claudia Rodriguez-Espinosa, Erika Elizabeth Pérez Múzquiz
Abstract:
Since the creation of the “Pueblos Mágicos” program in 2001, a series of social and cultural events have directly affected heritage conservation in the 121 registered localities, up until 2018, when the federal government terminated the program. Many studies have sought to analyze, from different perspectives and disciplines, the consequences these designations have generated in the “Pueblos Mágicos.” Multidisciplinary groups, such as the one headed by Carmen Valverde and Liliana López Levi, have brought together specialists from all over the Mexican Republic to create a set of diagnoses of most of these settlements, and although each one has unique specificities, a constant in most of them is the loss of cultural heritage, which is related to transculturality. Several identified factors have fostered this cultural loss, as a direct reflection of the economic crisis that prevails in Mexico. It is important to remember that the program's original main objective was to promote the growth and development of local economies, since one of the conditions for entering the program is having fewer than 20,000 inhabitants. With this goal in mind, one of the first actions many “Pueblos Mágicos” carried out was to improve or create infrastructure to receive both national and foreign tourists, since this was practically non-existent. Building hotels, restaurants, and cafes and training certified tour guides, among other actions, have led to one of the great problems they face: globalization. Although globalization is not inherently harmful, its impact in many cases has been negative for heritage conservation. Entry into and contact with new cultures has led to the undervaluation of cultural traditions, their transformation, and even their total loss.
This work seeks to present specific cases of transformation and loss of cultural heritage, as well as to reflect on the problem and propose scenarios in which the negative effects can be reversed. For this text, 36 “Pueblos Mágicos” were selected for study, based on the settlements cited in volumes I and IV (the first and last of the collection) of the series produced by the multidisciplinary group led by Carmen Valverde and Liliana López Levi (researchers from UNAM and UAM Xochimilco, respectively) in the CONACyT-supported project entitled “Pueblos Mágicos. An interdisciplinary vision”, of which we are part. This sample is considered representative, since it comprises 30% of the 121 “Pueblos Mágicos” existing at that moment. With this information, the elements of intangible heritage loss or transformation were identified in every chapter, based on the texts written by the participants of that project. Finally, this text presents an analysis of the effects that this federal program, as a public policy applied to 132 populations, has had on the conservation or transformation of the intangible cultural heritage of the “Pueblos Mágicos.” Transculturality, globalization, the creation of identities, and the desire to increase the flow of tourists have driven the changes that traditions (the main intangible cultural heritage) underwent in the 18 years the federal program lasted.
Keywords: public policies, cultural tourism, heritage preservation, pueblos mágicos program
833 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology began with a review of the technical-scientific literature on the indicators currently used for performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows considering the interconnection between the two EMS processes, the pre-hospital and the hospital one, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables the evaluation of the advantage of an ED selection decision made with visibility of EDs' saturation status, therefore considering the distance, the available resources, and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the high number of analyzed KPIs was reduced by eliminating first those not in line with the aim of the study and then those supporting a similar functionality.
The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, traditional KPIs cover mainly the initial phases, related to the interconnection between the ambulance service and patient care, rather than the subsequent phases taking place in the hospital ED. This could be taken into consideration for potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard, composed of a first level with a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level and further levels with KPIs bringing additional and more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
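As an illustration of the kind of indicators such a dashboard could combine, the sketch below computes one classic pre-hospital KPI and one integrated call-to-handover KPI from per-incident timestamps. All figures, thresholds, and variable names are invented for illustration; the paper's validated KPI set is not reproduced here.

```python
import numpy as np

# Hypothetical per-incident times in minutes from the emergency call
on_scene = np.array([9.0, 14.0, 7.5, 18.0, 6.0])        # call -> ambulance on scene
ed_handover = np.array([35.0, 60.0, 28.0, 75.0, 25.0])  # call -> patient handed to ED

# Pre-hospital KPIs: 90th-percentile response time and share of
# responses arriving within an (assumed) 8-minute target
response_p90 = np.percentile(on_scene, 90)
within_8min = np.mean(on_scene <= 8.0)

# Integrated KPI spanning both processes: mean call-to-ED-handover time,
# reflecting the end-to-end view of the EMS chain described above
total_mean = ed_handover.mean()
```

Separating phase-specific KPIs (response time) from end-to-end KPIs (call-to-handover) is what lets a dashboard expose the dependency between ambulance dispatch decisions and ED saturation.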
832 Survey of the Literacy by Radio Project as an Innovation in Literacy Promotion in Nigeria
Authors: Stella Chioma Nwizu
Abstract:
The National Commission for Adult and Non Formal Education (NMEC) in Nigeria is charged with reducing the illiteracy rate through the development, monitoring, and supervision of literacy programmes in Nigeria. In spite of various efforts by NMEC, the literature still shows that the illiteracy rate remains high. According to NMEC/UNICEF, about 60 million Nigerians are non-literate, and nearly two thirds of them are women. This situation forced the government to search for innovative and better approaches to literacy promotion and delivery. The literacy by radio project was adopted as an innovative intervention in literacy delivery in Nigeria because radio is the cheapest and most easily affordable medium for non-literates. The project aimed at widening access to literacy programmes for non-literate, marginalized, and disadvantaged groups in Nigeria by taking literacy programmes to their doorsteps. Literacy by radio has worked well for illiteracy reduction in Cuba. This innovative intervention is anchored in Rogers' diffusion of innovations theory. The literacy by radio project has been running for fifteen years, and the efficacy and contributions of this innovation need to be investigated. Thus, the purpose of this research is to review the contributions of literacy by radio in Nigeria. The researcher adopted the survey research design for the study. The population for the study consisted of 2,706 participants and 47 facilitators of the literacy by radio programme in the 10 pilot states in Nigeria. A sample of four states, comprising 302 participants and eight facilitators, was used for the study. Information was collected through Focus Group Discussion (FGD), interviews, and content analysis of official documents. The data were analysed qualitatively to review the contributions of the literacy by radio project and determine the efficacy of this innovative approach in facilitating literacy in Nigeria.
Results from the field experience showed, among other things, that more non-literates have better access to literacy programmes through this innovative approach. The pilot project was 88% successful; no fewer than 2,110 adults were made literate through the literacy by radio project in 2017. However, lack of enthusiasm and commitment on the part of the technical committee and facilitators due to non-payment of honoraria, poor signals from radio stations, interruption of lectures by adverts, and low community involvement in decision making are challenges to the success rate of the project. The researcher acknowledges the need to customize all materials and broadcasts in all the dialects of the participants and to include more civil rights, environmental protection, and agricultural skills in the project. The study recommends, among others, improved and timely funding of the project by the Federal Government to enable NMEC to fulfill its obligations towards the greater success of the programme, the setting up of independent radio stations for airing the programmes, and proper monitoring and evaluation of the project by NMEC and State Agencies for greater effectiveness. In an era of the knowledge-driven economy, no one should be left saddled with the weight of illiteracy.
Keywords: innovative approach, literacy, project, radio, survey
831 Waste Analysis and Classification Study (WACS) in Ecotourism Sites of Samal Island, Philippines Towards a Circular Economy Perspective
Authors: Reeden Bicomong
Abstract:
Ecotourism activities, though geared towards conservation, still put pressure on the natural state of the environment. An influx of visitors beyond the carrying capacity of an ecotourism site, the wastes generated, and greenhouse gas emissions are just a few of the potential negative impacts of poorly managed ecotourism activities. According to Girard and Nocca (2017), tourism produces many negative impacts because it is configured according to the model of linear economy, operating on a linear model of take, make, and dispose (Ellen MacArthur Foundation 2015). With the influx of tourists in an ecotourism area, more wastes are generated, and if unregulated, the natural state of the environment is at risk. It is in this light that a waste analysis and classification study was conducted in five ecotourism sites of Samal Island, Philippines. The major objective of the study was to analyze the amount and content of wastes generated from ecotourism sites in Samal Island, Philippines and make recommendations based on a circular economy perspective. Five ecotourism sites in Samal Island, Philippines were identified: Hagimit Falls, Sanipaan Vanishing Shoal, Taklobo Giant Clams, Monfort Bat Cave, and Tagbaobo Community Based Ecotourism. Ocular inspection of each ecotourism site was conducted, and key informant interviews of ecotourism operators and staff were carried out. Wastes generated from these ecotourism sites were analyzed and characterized to arrive at recommendations based on the concept of circular economy. Wastes generated were classified into biodegradables, recyclables, residuals, and special wastes. Regression analysis was conducted to determine whether an increase in the number of visitors equates to an increase in the amount of wastes generated. Ocular inspection indicated that all five ecotourism sites have their own system of waste collection.
All of the sites inspected were found to conduct waste separation at source, since there are different garbage bins for each of the four waste classifications: biodegradables, recyclables, residuals, and special wastes. Furthermore, all five ecotourism sites practice composting of biodegradable wastes and recycling of recyclables; therefore, only residuals are collected by the municipal waste collectors. Key informant interviews revealed that all five ecotourism sites offer mostly nature-based activities such as swimming, diving, sightseeing, bat watching, rice farming experiences, and community living. Among the five ecotourism sites, Sanipaan Vanishing Shoal has the highest average number of visitors on a weekly basis. At the same time, in the waste assessment study conducted, Sanipaan had the highest amount of wastes generated. Further results of the waste analysis revealed that biodegradables constitute the majority of the wastes generated in all five selected ecotourism sites. Meanwhile, special wastes proved to be the least generated, as no waste of this type was observed during the three consecutive weeks in which WACS was conducted.
Keywords: circular economy, ecotourism, sustainable development, WACS
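The visitor-waste relationship examined above is a simple bivariate regression. A minimal sketch of such an analysis, using invented weekly figures rather than the study's actual data, could look like this:

```python
import numpy as np

# Hypothetical weekly figures for one ecotourism site (illustrative only;
# the study's actual visitor and waste measurements are not reproduced here)
visitors = np.array([120, 340, 560, 800, 1010, 1300])  # visitors per week
waste_kg = np.array([35, 90, 160, 220, 300, 370])      # waste generated (kg)

# Ordinary least squares fit: waste ~ slope * visitors + intercept
slope, intercept = np.polyfit(visitors, waste_kg, deg=1)

# Coefficient of determination (R^2): how well visitor numbers
# explain the variation in waste generated
pred = slope * visitors + intercept
ss_res = np.sum((waste_kg - pred) ** 2)
ss_tot = np.sum((waste_kg - waste_kg.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A positive slope with a high R² would support the study's finding that the busiest site (Sanipaan) also generates the most waste.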
830 Mental Health on Three Continents: A Comparison of Mental Health Disorders in the USA, India and Brazil
Authors: Henry Venter, Murali Thyloth, Alceu Casseb
Abstract:
Historically, mental and substance use disorders were not a global health priority. Since the 1993 World Development Report, the importance of the contribution of mental health and substance abuse to the relative global burden of disease morbidity has been recognized, with 300 million people worldwide suffering from depression alone. This led to an international effort to improve the mental health of populations around the world. Despite these efforts, some countries remain at the top of the list of countries with the highest rate of mental illness. Important research questions were asked: Would there be commonalities regarding mental health between these countries? Would there be common factors leading to the high prevalence of mental illness? And how prepared are these countries for mental health delivery? Findings from this research can aid organizations and institutions preparing mental health service providers to focus training and preparation on the specific needs revealed by the study. Methods: The researchers decided to compare three distinctly different countries at the top of the list of countries with the highest rate of mental illness: the USA, India, and Brazil, situated on three different continents with different economies and lifestyles. Data were collected using archival research methodology, reviewing records and findings of international and national health and mental health studies to extract and compare data and findings. Results: The findings indicated that India is the most depressed country in the world, followed by the USA (and China), with Brazil having the greatest number of depressed individuals in Latin America. By 2020, roughly 20% of India, a country of over one billion citizens, will suffer from some form of mental illness, yet there are fewer than 4,000 experts available. In the USA, 164.8 million people were substance abusers, and an estimated 47.6 million adults, 18 or older, had a mental illness in 2018.
That means that about one in five adults in the USA experiences some form of mental illness each year, but only 41% of those affected received mental health care or services in the past year. Brazil has the greatest number of depressed individuals in Latin America. Adults living in the São Paulo megacity show a prevalence of mental disorders greater than that found in similar surveys conducted in other areas of the world, with more than one million adults at serious impairment levels. Discussion: The results show that, despite the vast socioeconomic differences between the three countries, there are correlations in mental health prevalence and in the difficulty of providing adequate services, including a lack of awareness of how serious mental illness is, stigma attached to seeking help for mental illness, comorbidity as a common phenomenon, and a lack of partnership between different levels of service providers, which weakens mental health service delivery. The findings also indicate that mental health training institutions have a monumental task to prepare personnel to address future mental health needs in each of the countries compared, which will constitute the next phase of the research.
Keywords: mental health epidemiology, mental health disorder, mental health prevalence, mental health treatment
829 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
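A doubly stochastic Poisson process of the kind described, with the rate of bucket-tip arrivals N(t) modulated by an unobserved finite-state Markov chain X(t), can be simulated directly. The sketch below is a minimal two-state illustration; the generator matrix and rates are invented for illustration and are not the fitted model parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dspp(Q, rates, t_end, rng):
    """Simulate a Markov-modulated (doubly stochastic) Poisson process.

    Q     : generator matrix of the hidden Markov state X(t)
    rates : Poisson intensity of N(t) in each hidden state
    Returns sorted event (bucket-tip) times in [0, t_end].
    """
    state, t, events = 0, 0.0, []
    while t < t_end:
        # Exponential holding time in the current hidden state
        hold = rng.exponential(1.0 / -Q[state, state])
        t_next = min(t + hold, t_end)
        # Poisson number of tips at this state's rate over the interval
        n = rng.poisson(rates[state] * (t_next - t))
        events.extend(rng.uniform(t, t_next, n))
        # Jump to a new state with probabilities Q[state, j] / -Q[state, state]
        probs = Q[state].copy()
        probs[state] = 0.0
        probs /= probs.sum()
        state = rng.choice(len(rates), p=probs)
        t = t_next
    return np.sort(np.array(events))

# Two hidden states, e.g. 'dry spell' (rare tips) vs 'storm' (frequent tips)
Q = np.array([[-0.1, 0.1],
              [0.5, -0.5]])     # state transitions per hour
rates = np.array([0.2, 12.0])   # bucket tips per hour in each state
tips = simulate_dspp(Q, rates, t_end=1000.0, rng=rng)
```

Fitting proceeds in the reverse direction: the likelihood of the observed tip times is evaluated by marginalizing over the hidden state path, which is what the conditioning on X(t) mentioned above makes tractable.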
828 Developing a Framework for Designing Digital Assessments for Middle-School Aged Deaf or Hard of Hearing Students in the United States
Authors: Alexis Polanco Jr, Tsai Lu Liu
Abstract:
Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines —e.g., in user experience design (UXD), there is Jacob Nielsen’s 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)— this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails asking subject-matter experts (SMEs) and design and development professionals to describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology.
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.
Keywords: deaf, hard of hearing, design, guidelines, education, assessment
827 Metabolomics Fingerprinting Analysis of Melastoma malabathricum L. Leaf of Geographical Variation Using HPLC-DAD Combined with Chemometric Tools
Authors: Dian Mayasari, Yosi Bayu Murti, Sylvia Utami Tunjung Pratiwi, Sudarsono
Abstract:
Melastoma malabathricum L. is an Indo-Pacific herb that has been traditionally used to treat several ailments such as wounds, dysentery, diarrhea, toothache, and diabetes. This plant is common across tropical Indo-Pacific archipelagos and is tolerant of a range of soils, from low-lying areas subject to saltwater inundation to the salt-free conditions of mountain slopes. How soil and environmental variation influence secondary metabolite production in the herb, and hence the plant's utility as traditional medicine, remains largely unknown and unexplored. The objective of this study is to evaluate the variability of the metabolic profiles of M. malabathricum L. across its geographic distribution. Employing a high-performance liquid chromatography-diode array detector (HPLC-DAD), a well-established, simple, sensitive, and reliable method was used to establish the chemical fingerprints of 72 samples of M. malabathricum L. leaves from various geographical locations in Indonesia. Specimens collected from six terrestrial and archipelago regions of Indonesia were analyzed by HPLC to generate chromatogram peak profiles that could be compared across regions. Data corresponding to the common peak areas of the HPLC chromatographic fingerprints were analyzed by hierarchical cluster analysis (HCA) and principal component analysis (PCA) to extract information on the most significant variables contributing to the characterization and classification of the analyzed samples. The first two principal components, PC1 and PC2, accounted for 41.14% and 19.32% of the variance, respectively. The validated HPLC chemical fingerprints, organized by variety and origin, were then used to screen the in vitro antioxidant activity of M. malabathricum L. The results show that the developed method has potential value for quality assessment of similar M. malabathricum L. samples.
These findings provide a pathway for the development and utilization of references for the identification of M. malabathricum L. Our results indicate the importance of considering geographic distribution during field-collection efforts, as they demonstrate regional variation in the secondary metabolites of M. malabathricum L., as illustrated by the HPLC chromatogram peaks and their antioxidant activities. The results also confirm the utility of this simple approach for rapid evaluation of metabolic variation between plants and their potential ethnobotanical properties, potentially due to the environments from which they were collected. This information will facilitate the optimization of growth conditions to suit particular medicinal qualities.
Keywords: fingerprint, high performance liquid chromatography, Melastoma malabathricum L., metabolic profiles, principal component analysis
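The PCA step on the matrix of common peak areas can be sketched as follows. The data here are randomly generated stand-ins for the 72 real chromatographic profiles, so the explained-variance figures will not match the reported 41.14% and 19.32%; the matrix shape and the mean-centering step are the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: rows = leaf samples, columns = common HPLC peak areas
# (72 samples as in the study; the 10 peaks are an assumption)
X = rng.gamma(shape=2.0, scale=1.0, size=(72, 10))

# PCA via SVD of the mean-centered peak-area matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                   # sample coordinates on the PCs (for score plots)
explained = s**2 / np.sum(s**2)  # fraction of variance per component
```

Plotting the first two columns of `scores` colored by collection region is what reveals the geographic clustering described above; the rows of `Vt` (loadings) identify which peaks drive the separation.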
826 Implications of Social Rights Adjudication on the Separation of Powers Doctrine: Colombian Case
Authors: Mariam Begadze
Abstract:
Separation of powers (SOP) has often been the most frequently posed objection against the judicial enforcement of socio-economic rights. Although a lot has been written to refute those objections, it has rarely been assessed what effect the current practice of social rights adjudication has had on the construction of the SOP doctrine in specific jurisdictions. Colombia is an appropriate case study for this question. The notion of collaborative SOP in the 1991 Constitution has affected the court's conception of its role. On the other hand, trends in the jurisprudence have further shaped the collaborative notion of SOP. Other institutional characteristics of Colombian constitutional law have played their share as well. The tutela action, a particularly flexible and fast judicial action for individuals, has placed the judiciary in a more confrontational relation vis-à-vis the political branches. Later interventions through abstract review of austerity measures further contributed to that development. Logically, the court's activism in this sphere has attracted attacks from the political branches, which have turned out to be unsuccessful precisely due to the court's outreach to the middle class, whose direct reliance on the court has turned into its direct democratic legitimacy. Only later have structural judgments attempted to revive the collaborative notion behind the SOP doctrine. However, the court-supervised monitoring of implementation has itself manifested fluctuations in the mode of collaboration, moving toward more managerial supervision recently. This is not surprising considering the highly dysfunctional political system in Colombia, where distrust seems to be the default starting point in the interaction of the branches.
The paper aims to answer the question of what the appropriate judicial tools are to realize the collaborative notion of SOP in a context where the court has to strike a balance between a strong executive and a weak, largely dysfunctional legislative branch. If the recurrent abuse lies in the indifference and inaction of the legislative branch in engaging seriously with political issues, what tools does the court have to activate the political process? The answer partly lies in the court's other strand of jurisprudence, in which it combines substantive objections with procedural ones concerning the operation of the legislative branch. The primary example is the decision on value-added tax on basic goods, in which the court invalidated the law based on the absence of sufficient deliberation in Congress on the bill's implications for the equity and progressiveness of the entire taxing system. The decision led to congressional rejection of an identical bill based on the arguments put forward by the court. This case is perhaps the best illustration of the collaborative notion of SOP, in which the court refrains from categorical pronouncements while doing its bit to activate the political process. This also legitimizes the court's activism, based on its role in countering the most perilous abuse in the Colombian context: the failure of the political system to engage seriously with serious political questions.
Keywords: Colombian constitutional court, judicial review, separation of powers, social rights
825 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit entirely based on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition via computed tomography. Irradion can realize nearly all conventional 2D and 3D trajectories of X-ray imaging with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast. They can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied for studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches; we demonstrate in this publication various novel scan trajectories and their benefits. Proton imaging and particle tracking: The Irradion platform allows combining several imaging modules with any required number of robots. The proton tracking module comprises another two robots, each holding particle tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation.
In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed for the project is an entirely new product on the market. Preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology can potentially represent a significant leap forward compared to current first-generation devices.
Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
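To illustrate how scan trajectories might be specified for two independently positioned robot arms, the sketch below generates source and detector poses for the simplest special case, a conventional circular orbit. The geometry, dimensions, and function name are invented for illustration and are not the platform's actual calibration or API.

```python
import numpy as np

def circular_trajectory(n_views, radius_src, radius_det):
    """Source/detector positions (in-plane, mm) for a circular CT orbit.

    With two independent robot arms, as in the platform described above,
    the tube and detector need not share a gantry, so arbitrary paths are
    possible; the circle below is just the simplest case a planner could emit.
    """
    angles = np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)
    # Source on one side of the specimen (at the origin) ...
    src = np.stack([radius_src * np.cos(angles),
                    radius_src * np.sin(angles)], axis=1)
    # ... detector diametrically opposite, at its own radius
    det = -src * (radius_det / radius_src)
    return src, det

src, det = circular_trajectory(n_views=360, radius_src=300.0, radius_det=200.0)
```

A non-circular (e.g. helical or specimen-conforming) path would simply replace the angle-to-position mapping, with each pose handed to the two arms' inverse kinematics.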