Search results for: perceptual present
919 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State
Authors: Valentina Bonetti
Abstract:
Exergy analysis helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field in recent decades, practical applications are hard to find, and few integrated exergy simulators are available to building designers. Undoubtedly, one obstacle to the diffusion of exergy methods is the strong dependency of results on the definition of the 'reference state', a highly controversial issue. Since exergy is the combination of energy and entropy by means of a reference state (also called "reference environment" or "dead state"), the reference choice is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near to the reference state, which means that small variations have relevant impacts, and their behaviour is dynamic in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially in the case of dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state whose fluctuations follow the local climate. Even though the latter selection prevails in research and is recommended by recent and widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice that defines exergy as a proper state function in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis. Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared at each time-step of a significant summer day, when the building behaviour is highly dynamic. Exergy efficiencies and fluxes are not familiar numbers, and thus the more intuitive concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in light of the underpinning dynamic energy analysis. In conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even though the fixed reference is generally only contemplated as a simpler selection, and the variable state is often stated to be more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamic simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice.
Keywords: exergy, reference state, dynamic, building
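To make the state-function argument concrete, consider the standard closed-system exergy expression (a sketch for illustration; the abstract does not give the author's exact formulation):

```latex
% Closed-system exergy relative to a reference environment at (T_0, p_0):
X = (U - U_0) + p_0\,(V - V_0) - T_0\,(S - S_0)
% Fixed reference: T_0 and p_0 are constants, so X = X(U, V, S) is a proper
% function of the system state alone.
% Variable reference: T_0 = T_0(t) follows the outdoor climate, so X depends
% explicitly on time through the reference terms, and dX is no longer an
% exact differential of the system state.
```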
Procedia PDF Downloads 226
918 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter
Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta
Abstract:
During cardiac ablation, high power delivery for deeper lesion formation is limited by overheating of the electrode-tissue interface, which can cause serious complications such as thrombus formation. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target value (commonly 55°C or 60°C); the electrode-tissue interface temperature is thereby also limited. The electrode temperature is the result of heating from the contacted tissue and cooling from the surrounding blood. Because the cooling from blood is reduced under conditions of low blood flow, the generator must decrease the output power. Thus, temperature control cannot deliver high power under low-flow conditions. In open irrigation, saline at room temperature is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this environmental cooling, and high power delivery is possible even under conditions of low blood flow. However, the large amount of saline infused during irrigation (approximately 1500 ml) can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new overheating prevention method may be required. The authors have proposed a new electrode cooling method based on vibrating the catheter. Previous work introduced the cooling effect of vibration on the electrode, which may result from the vibration increasing the flow velocity around the catheter, and showed that increasing the vibration frequency increases this cooling. However, the effect of vibration amplitude is still unknown. The present study therefore investigated the effect of vibration amplitude on tissue temperature and lesion size. An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature, with thermocouples inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A standard ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37°C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm at a constant frequency (31 or 63 Hz) were used. A temperature control protocol (45°C for the agar phantom, 60°C for porcine myocardium) was used for the radiofrequency applications. Larger amplitudes produced larger lesion sizes, and higher tissue temperatures in the agar phantom were also observed with higher amplitudes. At the same frequency, a larger amplitude means a higher vibrating speed, which further increases the flow velocity around the electrode and leads to a larger decrease in electrode temperature. To maintain the electrode at the target temperature, the ablator has to increase the output power. With higher output power over the same duration, the released energy also increases; consequently, the tissue temperature rises, leading to larger lesion sizes.
Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature
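The amplitude-speed relationship invoked above can be made explicit with a small worked calculation (not part of the original abstract, assuming sinusoidal motion):

```latex
% Sinusoidal vibration x(t) = A sin(2 pi f t) gives a peak electrode speed of
v_{peak} = 2\pi f A
% At f = 31 Hz: A = 0.5 mm -> v_peak ~ 0.097 m/s; A = 1.0 mm -> v_peak ~ 0.195 m/s.
% At f = 63 Hz, A = 1.0 mm: v_peak ~ 0.396 m/s.
% Doubling the amplitude at fixed frequency doubles the peak speed, which is
% consistent with the stronger convective cooling (and hence higher delivered
% power) reported for larger amplitudes.
```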
Procedia PDF Downloads 371
917 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry
Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay
Abstract:
The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are widely acknowledged to play a critical role in these systems for the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, combining high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. Our approach makes three main contributions. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas, and other studies mention the importance of research center locations and universities in the regional development of the industry; these studies, however, are mostly limited to the number of patents received within a short period of time or to limited survey results. The first contribution of our approach is therefore a comprehensive analysis of the state and recent history of photonics and optics research in the US. For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. The profiles and activities of these companies are then gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, the ES-202 database and the data sets from the regional photonics clusters. The number of start-ups, their employee numbers and their sales are examples of the extracted data for the industry. Our third contribution is the utilization of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.
Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers
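A minimal sketch of the correlation step described above; the file name and column names are assumptions for illustration, not the authors' actual dataset:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical regional table: one row per metropolitan area, with research
# infrastructure counts and entrepreneurship outcomes extracted from the
# integrated databases.
regions = pd.read_csv("photonics_regions.csv")

features = ["research_centers", "university_groups"]   # research infrastructure
outcomes = ["startups", "employees", "sales"]          # entrepreneurship activity

# Pairwise Pearson correlations between infrastructure and outcomes
print(regions[features + outcomes].corr().loc[features, outcomes])

# Significance test for a single pair of interest
r, p = pearsonr(regions["research_centers"], regions["startups"])
print(f"research centers vs. start-ups: r = {r:.2f}, p = {p:.3f}")
```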
Procedia PDF Downloads 407
916 Sustainability of the Built Environment of Ranchi District
Authors: Vaidehi Raipat
Abstract:
A city is an expression of coexistence between its users and the built environment; the way its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently to its people, culture, environment, visual image, history, visions and identity. The quality of the built environment determines the quality of our lifestyles, but a poor ability of the built environment to adapt and sustain itself through change leads to the degradation of cities. Ranchi was created in November 2000 as the capital of the newly formed state of Jharkhand, on the eastern side of India. Before this, Ranchi was known as the summer capital of Bihar and was little larger than a town in terms of development. Since then, however, it has been expanding vigorously in size, infrastructure and population. This sudden expansion has put stress on the existing built environment. The large forest cover, agricultural land, diverse culture and pleasant climatic conditions have degraded and diminished to a large extent. Narrow roads and old buildings are unable to bear the load of changing requirements, fast-improving technology and a growing population. The built environment has hence been rendered unsustainable and unadaptable by the rapid changes of the present era. Common hazards easily spotted in the built environment include half-finished built forms; pedestrians and vehicles moving on the same part of the road; unpaved areas on street edges; over-sized, bright and randomly placed hoardings; and negligible trees or green spaces. The old buildings have been poorly maintained, and new ones are being constructed over them. Roads are too narrow to cater to the increasing pedestrian and vehicular traffic. The streets host a large variety of activities, but haphazardly. Trees are being cut down for road widening and new construction. There is no space for greenery in the commercial or old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations, while a pseudo-understanding of functionality and aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the existing built environment of the city and to regenerate it into a more sustainable and adaptable one. For this purpose, the research titled "Sustainability of the Built Environment of Ranchi District" has been carried out. In this research, the conditions of the built environment of Ranchi are explored so as to identify the problems and shortcomings existing in the city and to provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, including its outdoor spaces such as streets, parks and other open areas, its built forms and its users, has been analyzed in terms of various urban design parameters, on the basis of which strategies have been suggested to make the city environmentally, socially, culturally and economically sustainable.
Keywords: adaptable, built-environment, sustainability, urban
Procedia PDF Downloads 237
915 A High Amylose-Content and High-Yielding Elite Line Is Favorable to Cook 'Nanhan' (Semi-Soft Rice) for Nursing Care Food Particularly for Serving Aged Persons
Authors: M. Kamimukai, M. Bhattarai, B. B. Rana, K. Maeda, H. B. Kc, T. Kawano, M. Murai
Abstract:
Most people older than 70 have some degree of difficulty in chewing and swallowing. According to the magnitude of this difficulty, gruel, "nanhan" (semi-soft rice) or ordinary cooked rice is served, particularly in sanatoriums and homes for older people in Japan. Nanhan is a cooked rice used in Japan with a softness intermediate between gruel and ordinary cooked rice, boiled with an intermediate amount of water between those of the latter two kinds of cooked rice. In the present study, nanhan was made at a ratio of 240 g of water to 100 g of milled rice with an electric rice cooker. Murai developed a high-amylose, high-yielding elite line, 'Murai 79'. A sensory eating-quality test was performed on the nanhan and the ordinary cooked rice of Murai 79 and the standard variety 'Hinohikari', a high eating-quality variety representative of southern Japan. Panelists (6 to 14 persons) scored each cooked rice on six items: taste, stickiness, hardness, flavor, external appearance and overall evaluation. Each trait was graded from -3 to +3, with the value of the standard variety Hinohikari set at 0. Paddy rice produced in a farmer's field in 2013 and 2014 and in an experimental field of Kochi University in 2015 and 2016 was used for the sensory test. According to the results of the sensory eating-quality test for nanhan, Murai 79 scored higher in overall evaluation than Hinohikari in all four years. The former was less sticky than the latter in the four years, but statistically significantly harder throughout. In external appearance, the former was significantly higher than the latter in the four years. In taste, the former was significantly higher than the latter in 2014, but no significant difference was noticed between them in the other three years. There were no significant differences in flavor throughout the four years. Regarding amylose content, Murai 79 was higher than Hinohikari by 3.7 and 5.7% in 2015 and 2016, respectively. As for protein content, Murai 79 was higher than Hinohikari in 2015 but lower in 2016. Consequently, the nanhan of Murai 79 was harder and less sticky and kept the shape of its grains compared with that of Hinohikari, which may be due to its higher amylose content. Hence, the nanhan of Murai 79 may be recognized as grains more easily in the mouth, which could ease the continuous performance of mastication and deglutition, particularly in aged persons. Regarding ordinary cooked rice, Murai 79 was similar to or higher than Hinohikari in both overall evaluation and external appearance, despite its greater hardness and lower stickiness. Additionally, Murai 79 had a brown-rice yield 1.55 times that of Hinohikari, suggesting that it would make it possible to supply inexpensive rice for making high-quality nanhan, particularly for aged people in Japan.
Keywords: high-amylose content, high-yielding rice line, nanhan, nursing care food, sensory eating quality test
Procedia PDF Downloads 138
914 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS
Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana
Abstract:
Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force; it represents a leading cause of mortality and morbidity among children and young individuals. Various rodent models of TBI have been developed in the laboratory to mimic injury scenarios. Blast overpressure injury, following accidents or explosive devices, is common among civilians and military personnel; in addition, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 (±5) kPa via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, creating diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups — (1) control, (2) blast treated, (3) CCI treated — and exposed to the corresponding injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed from their fragmentation patterns by LipidBlast. In total, 269 features were annotated in positive and 182 features in negative ionization mode. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1, and the fatty acyl FA 21:3;O2) were significantly altered in both the mild blast and CCI groups at day 1 and day 7, and also had VIP scores >1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports and could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of specific classes of lipid metabolism, regulation and transport could benefit TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for injury severity diagnosis and identification irrespective of injury type (diffuse or focal).
Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI
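A minimal sketch of the preprocessing chain named in the abstract (normalization, log transformation, Pareto scaling, then PCA); the random matrix stands in for the real samples-by-features intensity table, and the processing order is a common convention rather than the authors' stated one:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(18, 451))   # e.g., 451 annotated features

X = X / X.sum(axis=1, keepdims=True)                     # row-wise normalization
X = np.log10(X)                                          # log transformation
X = (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0))        # Pareto scaling: mean-center,
                                                         # divide by sqrt of the SD

scores = PCA(n_components=2).fit_transform(X)            # PC scores used to inspect
print(scores[:3])                                        # group segregation
```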
Procedia PDF Downloads 113
913 Differentially Expressed Protein Biomarkers in Early and Advanced Stage Young Triple-Negative Breast Cancer Patients
Authors: Shamim Mushtaq, Moazzam Shahid
Abstract:
Breast cancer (BC) claims the lives of half a million women every year and is the most common cause of cancer death among women in the developing world. In 2019, it was estimated that BC alone accounted for 15% of all cancer deaths in younger women (aged <45 years) with advanced-stage lung metastasis. According to the World Health Organization and the International Union Against Cancer, a high number of cancer-related deaths was expected in Asia in 2020, whereas the burden would be reduced in Western countries owing to awareness about the disease, better health facilities and advanced treatments. Over the last 15 years, it has been reported that the incidence of BC increased by 1.1% among Asians compared to the US population from 2003 to 2012. Several BC biological subtypes have been reported to date, associated with different treatment responses. The heterogeneity and diversity of BC are reflected in these subtypes, including Luminal A (23.7% prevalence) and B (38.8% prevalence), which carry pathological estrogen receptor-positive (ER+) tumors, human epidermal growth factor receptor 2 (HER2) (11.2% prevalence) and triple-negative breast cancer (TNBC) (25% prevalence). According to Shaukat Khanum Memorial Cancer Hospital and Research Centre, Pakistan, ten years of data showed that among 636 BC patients, 30.5% had TNBC and were <40 years of age, which is an extremely alarming situation. There is therefore a dire need to explore and develop therapeutic targets for the treatment of early TNBC. Over the last decade, unfortunately, there has been little success in understanding the complexity of TNBC and in discovering new biological therapeutic targets, and conventional chemotherapy remains the only treatment choice for TNBC patients. Many investigators have reported advances in multi-omics (multiple "omes", e.g., genome, proteome, transcriptome, epigenome and microbiome) that later identified actionable targets of increased prevalence in TNBC patients, and various drugs have been identified that relate to particular diagnostic and prognostic biomarkers, for example epidermal growth factor receptor (EGFR or ErbB-1), HER-2/neu (ErbB-2), HER-3 (ErbB-3) and HER-4 (ErbB-4). Transgelin-2 (TAGLN2) and Profilin-1 (Pfn-1) belong to ubiquitously expressed protein families present in all eukaryotes that enable actin cytoskeletal reorganization. It is known that the oncogenic transformation of cells is accompanied by alterations in the actin cytoskeleton, and there are causal connections between altered expression of actin cytoskeletal regulators and cancer progression. Our case-control study identified the TAGLN-2 and Pfn-1 proteins in TNBC blood by mass spectrometry. Both proteins are differentially expressed in early and advanced stages of TNBC patients and could be potential predictors or therapeutic targets for TNBC.
Keywords: TNBC, blood biomarkers, mass spectrometry, qPCR, ELISA
Procedia PDF Downloads 43
912 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions
Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita
Abstract:
Self-assembly is an attractive method for forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role in self-assembly during the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions; although these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to association processes. The relative ease of PEC formation makes them a versatile tool for preparing various materials with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization and polymeric drug delivery systems. The present investigation focuses on the formation of a PPC between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). The PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques — spectroscopy, microscopy, thermogravimetric analysis, DLS and zeta potential determination — performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes in the surface free energy of the prepared PPCs in relation to complex composition. The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes in both partners, a 1/1 ratio between protein and polyelectrolyte being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms with a compact structure and small dimensions, with a hydrodynamic diameter close to that of BSA. The thermal behaviour of the prepared PPCs is in agreement with the composition of the complexes. Contact angle determination shows increased cohesion of the PPC films, higher than that of BSA films. The new PPC films also exhibit higher hydrophobicity, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. SEM investigation likewise evidenced the specific internal structure of the PPCs, concretized in phases of different size and shape depending on the composition of the interpolymer mixture.
Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly
Procedia PDF Downloads 246
911 Construal Level Perceptions of Environmental vs. Social Sustainability in Online Fashion Shopping Environments
Authors: Barbara Behre, Verolien Cauberghe, Dieneke Van de Sompel
Abstract:
Sustainable consumption is on the rise, yet it has still not entered the mainstream in several industries, such as fashion. In online fashion contexts, sustainability cues have been used to signal the sustainable benefits of certain garments and thereby promote sustainable consumption. These cues may focus on the ecological or the social dimension of sustainability. Since sustainability in general relates to distant, abstract benefits, the current study examines if and how psychological distance mediates the effects of exposure to different sustainability cues on consumption outcomes. Following the framework of Construal Level Theory of Psychological Distance, reduced psychological distance renders the construal level more concrete, which may influence attitudes and subsequent behavior in situations such as fashion shopping. Most studies have investigated sustainability as a composite, failing to differentiate between the ecological and societal aspects of sustainability. The few studies examining sustainability in more detail uncovered that environmental sustainability is perceived in rather abstract cognitive construal, whereas social sustainability is linked to concrete construal. However, the construal-level affiliation of the sustainability dimensions is likely not universally applicable across consumption domains and stages, which further suggests a need to clarify the relationships between the environmental and social sustainability dimensions and the construal level of psychological distance within fashion brand consumption. While psychological distance and construal level have been examined in the context of sustainability, these studies yielded mixed results. The inconsistent findings of past studies might be due to the context-dependence of psychological distance, which induces construal differently in different situations. Especially in a hedonic consumption context like online fashion shopping, the mode of visual information processing could determine behavioral outcomes linked to situational construal. Given the influence of the mode of processing on psychological distance and construal level, the current study examines the moderating role of verbal versus non-verbal presentation of the sustainability cues. In a 3 (environmental sustainability vs. social sustainability vs. control) x 2 (non-verbal message vs. verbal message) between-subjects experiment, the present study thus examines how consumers evaluate sustainable brands in online shopping contexts in terms of psychological distance and construal level, as well as the impact on brand attitudes and buying intentions. The results among 246 participants verify the differential impact of the sustainability dimensions on fashion brand purchase intent, as mediated by construal level and perceived psychological distance. The ecological sustainability cue is perceived as more concrete, which might be explained by consumer bias induced by the predominance of pro-environmental sustainability messages. The verbal versus non-verbal presentation of the sustainability cue had no significant influence on distance perceptions, construal level or buying intentions. This study offers valuable contributions to the sustainable consumption literature, as well as a theoretical basis for construal-level framing as applied in sustainable fashion branding.
Keywords: construal level theory, environmental vs social sustainability, online fashion shopping, sustainable fashion
Procedia PDF Downloads 103
910 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. When it comes to estimating agricultural yield, especially for arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study proposes a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into individual fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can cover only some trees; drone data were therefore integrated in order to estimate the yield over a whole orchard. To achieve this, features derived from the drone digital surface model were linked to the laser-scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, compared to yield estimates by the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and the possibility of extension to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
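An illustrative sketch of the two processing steps described above; the file names, clustering thresholds, spherical fruit model and density value are assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

# 1) Cluster fruit-classified TLS points (xyz, assumed in meters) into fruits.
fruit_points = np.load("fruit_points.npy")                # (N, 3) pre-filtered points
labels = DBSCAN(eps=0.04, min_samples=20).fit_predict(fruit_points)  # ~4 cm radius

volumes = []
for k in set(labels) - {-1}:                              # -1 = noise points
    cluster = fruit_points[labels == k]
    extents = cluster.max(axis=0) - cluster.min(axis=0)   # bounding-box size proxy
    radius = extents.mean() / 2.0
    volumes.append(4.0 / 3.0 * np.pi * radius**3)         # assumed spherical fruit

tls_tree_yield_kg = 950.0 * sum(volumes)                  # assumed density ~950 kg/m^3

# 2) Regress TLS-derived yields of sampled trees on drone-DSM features
# (e.g., crown height, crown area) to extrapolate over the whole orchard.
dsm_features = np.load("dsm_features.npy")                # (n_sampled_trees, n_features)
tls_yields = np.load("tls_yields.npy")                    # (n_sampled_trees,)
model = LinearRegression().fit(dsm_features, tls_yields)
orchard_yield = model.predict(np.load("all_trees_features.npy")).sum()
print(f"estimated orchard yield: {orchard_yield:.0f} kg")
```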
Procedia PDF Downloads 142
909 Pueblos Mágicos in Mexico: The Loss of Intangible Cultural Heritage and Cultural Tourism
Authors: Claudia Rodriguez-Espinosa, Erika Elizabeth Pérez Múzquiz
Abstract:
Since the creation of the “Pueblos Mágicos” program in 2001, a series of social and cultural events has directly affected heritage conservation in the 121 localities registered by 2018, when the federal government terminated the program. Many studies have sought to analyze, from different perspectives and disciplines, the consequences these appointments have generated in the “Pueblos Mágicos.” Multidisciplinary groups, such as the one headed by Carmen Valverde and Liliana López Levi, have brought together specialists from all over the Mexican Republic to create a set of diagnoses of most of these settlements, and although each has unique specificities, a constant in most of them has to do with the loss of cultural heritage and is related to transculturality. Several factors have been identified that foster cultural loss, as a direct reflection of the economic crisis that prevails in Mexico. It is important to remember that the program’s original main objective was to promote the growth and development of local economies, since one of the conditions for entering the program is a population of fewer than 20,000 inhabitants. With this goal in mind, one of the first actions many “Pueblos Mágicos” carried out was to improve or create infrastructure to receive both national and foreign tourists, since this was practically non-existent. Creating hotels, restaurants and cafes and training certified tour guides, among other actions, has led to one of the great problems they face: globalization. Although globalization is not bad in itself, its impact has in many cases been negative for heritage conservation. Entry into, and contact with, new cultures has led to the undervaluation of cultural traditions, their transformation and even their total loss. This work seeks to present specific cases of transformation and loss of cultural heritage, as well as to reflect on the problem and propose scenarios in which the negative effects can be reversed. For this text, 36 “Pueblos Mágicos” have been selected for study, based on the settlements cited in volumes I and IV (the first and last of the collection) of the series produced by the multidisciplinary group led by Carmen Valverde and Liliana López Levi (researchers at UNAM and UAM Xochimilco, respectively) in the CONACyT-supported project entitled “Pueblos Mágicos. An interdisciplinary vision”, of which we are part. This sample is considered representative, since it comprises 30% of the 121 “Pueblos Mágicos” existing at that moment. With this information, the elements of intangible heritage loss or transformation have been identified in every chapter, based on the texts written by the project participants. Finally, this text presents an analysis of the effects that this federal program, as a public policy applied to 132 populations, has had on the conservation or transformation of the intangible cultural heritage of the “Pueblos Mágicos.” Transculturality, globalization, the creation of identities and the desire to increase the flow of tourists have all driven the changes that traditions (the main intangible cultural heritage) underwent in the 18 years the federal program lasted.
Keywords: public policies, cultural tourism, heritage preservation, pueblos mágicos program
Procedia PDF Downloads 190
908 Teaching for Social Justice: Towards Education for Sustainable Development
Authors: Nashwa Moheyeldine
Abstract:
Education for sustainable development (ESD) aims to preserve the rights of present and future generations as well as to preserve the globe, both humans and nature. ESD should aim not only to raise consciousness of current and future issues but also to foster student agency to bring about change in schools, communities and nations. According to the Freirean concept of conscientização (conscientization) — “learning to perceive social, political, and economic contradictions, and to take action against the oppressive elements of reality” — education aims to liberate people to understand and act upon their worlds. Social justice is greatly intertwined with a nation’s social, political and economic rights and should thus be targeted through ESD. “Literacy researchers have found that K-12 students who engage in social justice inquiries develop vital academic knowledge and skills, critical understandings about oppression in the world, and strong dispositions to continue working toward social justice beyond the initial inquiries they conduct”. Education for social justice equips students with the critical thinking skills and sense of agency required for the responsible decision-making that would ensure a sustainable world. In fact, teaching for social justice intersects with many pedagogies, such as multicultural education, culturally relevant pedagogy, education for sustainable development, critical theory pedagogy and (local and global) citizenship education, all of which aim to prepare students for awareness, responsibility and agency. Social justice pedagogy has three specific goals: helping students develop 1) a sociopolitical consciousness — an awareness of the symbiotic relationship between the social and political factors that affect society, 2) a sense of agency — the freedom to act on one’s own behalf and to feel empowered as a change agent, and 3) positive social and cultural identities. The key to social justice education is to expose realities to the students and to challenge them not only to question but also to change those realities. Social justice has usually been discussed through history and the social sciences; however, an interdisciplinary approach is essential to enhance students’ understanding of their world. Teaching social justice through various subjects is also important, as it makes students’ learning relevant to their lives. The main question this paper seeks to answer is how social justice can be taught through different subjects and tools, such as mathematics, literature through storytelling, geography and service learning. Challenges to education for social justice will also be described. Education is not a neutral endeavor but is oriented either toward the cause of liberation or in support of domination. In fact, classrooms can be “a microcosm of the emancipatory societies we seek to encourage”, and education for the 21st century should be relevant to students’ lives, exposing life’s realities to them. Education should also provide students with the basics of school subjects, with the bigger goal of helping them make the world a better, more just place to live in.
Keywords: teaching for social justice, student agency, citizenship education, education
Procedia PDF Downloads 403
907 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both the operational and strategic decisions of different actors. The research methodology consists of a first phase reviewing the technical-scientific literature on the indicators currently used to measure the performance of EMS systems. This literature analysis showed that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensuring high quality of care and patient safety. The proposal thus focuses on the entire healthcare service process and, as such, allows consideration of the interconnection between the two EMS processes — the pre-hospital and hospital ones — connected by the assignment of the patient to a specific ED. In this way, the entire patient management can be optimized. Attention is therefore paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, integrating the two processes enables an evaluation of the advantage of an ED-selection decision made with visibility of ED saturation status, and therefore considering the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the large number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators whose required data were unavailable. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens’ awareness of ED accessibility in real time. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, only the initial phases related to the interconnection between the ambulance service and patient care are covered by traditional KPIs, whereas the subsequent phases taking place in the hospital ED are not; this could be taken into consideration in potential future developments of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level, and further levels with KPIs bringing additional, more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
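A minimal sketch of two dashboard KPIs, one per EMS phase; the event-log format and column names (including ed_id) are assumptions for illustration, not the authors' dataset:

```python
import pandas as pd

cols = ["call_time", "arrival_time", "ed_handover_time", "discharge_time"]
events = pd.read_csv("ems_events.csv", parse_dates=cols)

# Pre-hospital KPI: 90th-percentile ambulance response time, in minutes.
response_min = (events["arrival_time"] - events["call_time"]).dt.total_seconds() / 60
print("P90 response time [min]:", round(response_min.quantile(0.9), 1))

# Hospital-interface KPI: current ED load (patients handed over but not yet
# discharged), the saturation signal that supports the ED-selection decision.
open_cases = events[events["ed_handover_time"].notna() & events["discharge_time"].isna()]
print(open_cases.groupby("ed_id").size().rename("current_ED_load"))
```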
Procedia PDF Downloads 113
906 Human Immuno-Deficiency Virus Co-Infection with Hepatitis B Virus and Baseline Cd4+ T Cell Count among Patients Attending a Tertiary Care Hospital, Nepal
Authors: Soma Kanta Baral
Abstract:
Background: Since 1981, when the first AIDS case was reported, more than 34 million people worldwide have been infected with HIV; almost 95 percent of them live in developing countries. As HBV and HIV share similar routes of transmission, through sexual intercourse or parenteral drug injection, co-infection is common. Because of limited access to healthcare and HIV treatment in developing countries, HIV-infected individuals present late for care. Enumeration of the CD4+ T cell count at the time of diagnosis is useful for initiating therapy in HIV-infected individuals; the baseline CD4+ T cell count shows high immunological variability among patients. Methods: This prospective study was done in the serology section of the Department of Microbiology over a period of one year, from August 2012 to July 2013. A total of 13037 individuals subjected to HIV testing were included in the study, comprising 4982 males and 8055 females. Blood samples were collected aseptically by venipuncture into clean, dry test tubes following standard operating procedures. All blood samples were screened for HIV with immunochromatography rapid kits as described by the WHO algorithm, with confirmation by the Biokit ELISA method as per the manufacturer’s guidelines. After informed consent, HIV-positive individuals were screened for HBsAg by immunochromatography rapid kits (Hepacard), with further confirmation by the Biokit ELISA method as per the manufacturer’s guidelines. EDTA blood samples were collected from the HIV-seropositive individuals, and the baseline CD4+ T cell count was determined using a FACSCalibur flow cytometer (BD). Results: Among the 13037 individuals screened for HIV, 104 (0.8%) were found to be infected, comprising 69 (66.34%) males and 35 (33.65%) females. The study showed high infection rates among housewives (28.7%), the active age group (30.76%), rural areas (56.7%) and the heterosexual route of transmission (80.9%). Of the HIV-infected individuals, 6 (5.7%) were co-infected with HBV. All co-infected individuals were married males above 25 years of age with a heterosexual route of transmission. The baseline CD4+ T cell count of HIV-infected patients was higher (mean 283 cells/cu.mm) than that of HBV co-infected patients (mean 91 cells/cu.mm). The majority (77.2%) of HIV-infected individuals, and all co-infected individuals, presented to our center late for diagnosis and care (CD4+ T cell count <350/cu.mm). Most of the co-infected individuals, 4 (80%), presented late with advanced AIDS (CD4+ count <200/cu.mm). Conclusions: The study showed a high percentage of HIV-seropositive and co-infected individuals, and the baseline CD4+ T cell count of the majority of HIV-infected individuals was low. Hence, more sustained and vigorous awareness campaigns and counseling are still needed in order to promote early diagnosis and management.
Keywords: HIV/AIDS, HBsAg, co-infection, CD4+
Procedia PDF Downloads 215
905 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results for some of the fine-scale rainfall models constructed from this class of stochastic point processes. Within the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications; for example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach, developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rain gauge bucket tip-time series. In this context, the arrival pattern of bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t). Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell, using different mechanisms for the pattern of the pulse process to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
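A minimal simulation sketch of the DSPP described above — a two-state Markov-modulated Poisson process for bucket-tip times — with illustrative rates and generator entries, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

rates = np.array([0.02, 1.5])          # tips per minute in state 0 ("dry") and 1 ("wet")
Q = np.array([[-0.01, 0.01],           # generator of X(t): mean dry spell ~100 min,
              [0.05, -0.05]])          # mean wet spell ~20 min

def simulate_mmpp(T_minutes):
    """Simulate tip times on [0, T] from a Markov-modulated Poisson process."""
    t, state, tips = 0.0, 0, []
    while t < T_minutes:
        dwell = rng.exponential(-1.0 / Q[state, state])   # sojourn until state switch
        end = min(t + dwell, T_minutes)
        n = rng.poisson(rates[state] * (end - t))         # tip count in this sojourn
        tips.extend(rng.uniform(t, end, size=n))          # uniform placement given count
        t, state = end, 1 - state                         # two-state chain alternates
    return np.sort(np.array(tips))

tips = simulate_mmpp(24 * 60)           # one day of synthetic bucket-tip times
print(len(tips), "tips; first few:", np.round(tips[:5], 1))
```

In practice the chain X(t) is unobserved, so fitting proceeds by maximising the likelihood of the tip times marginalised over X(t), as described in the abstract.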
Procedia PDF Downloads 278
904 Utilization of Fly Ash Amended Sewage Sludge as Sustainable Building Material
Authors: Kaling Taki, Rohit Gahlot, Manish Kumar
Abstract:
Disposal of sewage sludge (SS) is a big issue, especially in a developing nation like India, where the quantity of SS produced is largely uncontrolled. The present research work demonstrates the potential application of SS amended with varying percentages (0-100%) of fly ash (FA) for brick manufacturing as an alternative form of SS management. SS samples were collected from the Jaspur sewage treatment plant (Ahmedabad, India) and subjected to different preconditioning treatments: (i) atmospheric drying, (ii) pulverization, and (iii) heat treatment in an oven (110°C, moisture removal) and muffle furnace (440°C, organic content removal). Geotechnical parameters of the SS were obtained as liquid limit (52%), plastic limit (24%), shrinkage limit (10%), plasticity index (28%), differential free swell index (DFSI, 47%), silt (68%), clay (27%), organic content (5%), optimum moisture content (OMC, 20%), maximum dry density (MDD, 1.55 g/cc), specific gravity (2.66), swell pressure (57 kPa) and unconfined compressive strength (UCS, 207 kPa). For FA, the liquid limit, plastic limit and specific gravity were 44%, 0% and 2.2, respectively. For brick casting, the pulverized SS sample was first heat-treated in a muffle furnace at around 440°C for 5 hours to remove organic matter. SS, FA and water were then mixed by weight at OMC. A 7×7×7 cm sample mold was used for casting bricks at MDD. Brick samples were first dried at room temperature for 24 hours, then in an oven at 100°C for 24 hours, and finally fired in a muffle furnace at 1000°C for 10 hours. The fired brick samples were then cured for 3 days according to the Indian Standard (IS) specification for common burnt clay building bricks (5th revision). The compressive strengths of brick samples with 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% FA were 0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78 and 4.95 MPa, respectively, when evaluated through a compressive testing machine (CTM) at a stress rate of 14 MPa/min. The highest strength among the mixtures was obtained at 40% FA, i.e., 4.02 MPa, which is much higher than that of the pure SS brick sample. According to IS 1077:1992, this combination gives a strength of more than 3.5 MPa and can be utilized for common building bricks. The loss in weight after firing was much higher than after the oven treatment, which might be due to degradation temperatures above 100°C. The thermal conductivity of the fired brick was obtained as 0.44 W/(m·K), indicating better insulation properties than other reported studies. TCLP (toxicity characteristic leaching procedure) tests of Cr, Cu, Co, Fe and Ni in raw SS gave 69, 70, 21, 39502 and 47 mg/kg, respectively. The study positively concludes that SS and FA in an optimum ratio can be utilized for common building bricks, such as partition walls and other low-strength applications. The uniqueness of the work lies in its emphasis on utilizing FA-stabilized SS as a construction material to replace the natural clay reported in existing studies.
Keywords: compressive strength, curing, fly ash, sewage sludge
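For reference, the reported strengths relate to the CTM peak load through the loaded face of the 7×7×7 cm cube; a back-calculation (the peak loads themselves are not reported in the abstract):

```latex
% Compressive strength from peak load P over the loaded face area A:
\sigma = \frac{P}{A}, \qquad A = 70\,\mathrm{mm} \times 70\,\mathrm{mm} = 4900\,\mathrm{mm^2}
% For the optimum 40% FA mix, sigma = 4.02 MPa corresponds to
% P = 4.02 N/mm^2 x 4900 mm^2 ~ 19.7 kN at failure.
```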
Procedia PDF Downloads 111
903 Algal/Bacterial Membrane Bioreactor for Bioremediation of Chemical Industrial Wastewater Containing 1,4 Dioxane
Authors: Ahmed Tawfik
Abstract:
Oxidation of 1,4-dioxane produces metabolite by-products, including glycolaldehyde and acids, that have genotoxic and cytotoxic impacts on microbial degradation. The incorporation of algae with bacteria in the treatment system would therefore eliminate the accumulation of these metabolites, which are instead utilized as a carbon source for biomass build-up. The aim of the present study is thus to assess the potential of an algae/bacteria-based membrane bioreactor (AB-MBR) for the biodegradation of 1,4-dioxane-rich wastewater at high imposed loading rates. Three identical reactors, i.e., AB-MBR1, AB-MBR2 and AB-MBR3, were operated in parallel at 1,4-dioxane loading rates of 641.7, 320.9 and 160.4 mg/L.d and HRTs of 6, 12 and 24 h, respectively. AB-MBR1 achieved a 1,4-dioxane removal rate of 263.7 mg/L.d, with a residual value in the treated effluent of 94.4±22.9 mg/L. Reducing the 1,4-dioxane loading rate (LR) to 320.9 mg/L.d in AB-MBR2 maximized the removal rate at 265.9 mg/L.d, with a removal efficiency of 82.8±3.2%. The minimum 1,4-dioxane value of 17.3±1.8 mg/L in the treated effluent of AB-MBR3 was obtained at an HRT of 24 h and a loading rate of 160.4 mg/L.d. The mechanism of 1,4-dioxane degradation in the AB-MBR was a combination of volatilization (8.03±0.6%), UV oxidation (14.1±0.9%), microbial biodegradation (49.1±3.9%) and absorption/uptake and assimilation by algae (28.8±2%). Further, the genera Thioclava, Afipia and Mycobacterium oxidized the compound and produced the enzymes required for hydrolysis and cleavage of the dioxane ring into 2-hydroxy-1,4-dioxane. Moreover, the fungi, i.e., Basidiomycota and Cryptomycota, played a major role in the degradation of 1,4-dioxane into 2-hydroxy-1,4-dioxane. Xanthobacter and Mesorhizobium were involved in the metabolism process by secreting alcohol dehydrogenase (ADH), aldehyde dehydrogenase (ALDH) and glycolate oxidase. Bacteria and fungi produced dehydrogenase (DH) for the transformation of 2-hydroxy-1,4-dioxane into 2-hydroxy-ethoxyacetaldehyde; the latter is converted into ethylene glycol by ALDH, and ethylene glycol is oxidized into acids by ADH. The Diatomea, Chlorophyta and Streptophyta utilize the metabolites for biomass assimilation and produce the oxygen required for further oxidation of the dioxane and its metabolite by-products by bacteria and fungi. The major portion of the metabolites (ethylene glycol, glycolic acid and oxalic acid) was removed by uptake and absorption by algae (43±4.3%), followed by adsorption (18.4±0.9%); the contributions of volatilization and UV oxidation to the degradation of the metabolites were 8.7±0.7% and 12.3±0.8%, respectively. The genera Defluviimonas, Thioclava, Luteolibacter, Afipia and Mycobacterium were capable of growing under the high 1,4-dioxane LR of 641.7 mg/L.d. The Chlorophyta (4.1-43.6%), Streptophyta (2.5-21.7%) and Diatomea (0.8-1.4%) phyla were dominant in the degradation of 1,4-dioxane. The results of this study strongly demonstrate that the bioremediation and bioaugmentation process can safely remove 1,4-dioxane from industrial wastewater while minimizing environmental concerns and reducing economic costs.
Keywords: wastewater, membrane bioreactor, bacterial community, algal community
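The reported loading rates and HRTs are mutually consistent with a single influent concentration; a quick check of the relation (not stated explicitly in the abstract):

```latex
% Volumetric loading rate from influent concentration and hydraulic retention time:
LR \; [\mathrm{mg/L.d}] = \frac{24\,C_{in}}{HRT\,[\mathrm{h}]}
% With C_in ~ 160.4 mg/L:
% HRT = 6 h:  LR = 160.4 x 24/6  ~ 641.7 mg/L.d  (AB-MBR1)
% HRT = 12 h: LR = 160.4 x 24/12 ~ 320.9 mg/L.d  (AB-MBR2)
% HRT = 24 h: LR = 160.4 x 24/24 = 160.4 mg/L.d  (AB-MBR3)
```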
Procedia PDF Downloads 43902 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study
Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski
Abstract:
Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle, and connective tissues to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance, and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years, randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. In the test group the vibration parameters were a frequency of 20 Hz and an amplitude of 3 mm; the control group performed the exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each, with a 60-second rest between them. The exercises involved the large muscle groups of the trunk, pelvis, and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain, with JPS error measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with the eyes open and closed (each test lasting 30 s), using Matscan with FootMat 7.0 SAM software. The surface of the confidence ellipse and the front-back and right-left sway were measured to assess balance. Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change significantly in either the test group (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the parameters during balance tests with the eyes open or closed, in either the test or the control group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction can have only a short-lasting effect, present only while the vibration stimulus itself is present. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results need to be supplemented with an assessment of proprioception during the application of vibration stimuli, and the impact of the vibration parameters used in the exercises should also be evaluated.Keywords: balance, joint position sense, proprioception, whole body vibration
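For readers unfamiliar with posturographic outcome measures, the sketch below (Python) shows how the two balance outcomes named above are typically computed: the 95% confidence-ellipse area and the anterior-posterior/medio-lateral sway ranges. The centre-of-pressure trace here is synthetic, a hypothetical stand-in for the FootMat recordings.

```python
# Confidence-ellipse area and sway ranges from a centre-of-pressure (CoP) trace.
import numpy as np

def sway_summary(cop_xy: np.ndarray) -> dict:
    """cop_xy: (n_samples, 2) array of CoP positions in cm (x = ML, y = AP)."""
    cov = np.cov(cop_xy, rowvar=False)
    chi2_95_df2 = 5.991  # 95% quantile of chi-square with 2 degrees of freedom
    area = np.pi * chi2_95_df2 * np.sqrt(np.linalg.det(cov))  # 95% ellipse area
    ml_range, ap_range = np.ptp(cop_xy, axis=0)               # peak-to-peak sway
    return {"ellipse_area_cm2": area, "ml_sway_cm": ml_range, "ap_sway_cm": ap_range}

rng = np.random.default_rng(0)
cop = rng.normal(scale=[0.4, 0.6], size=(30 * 100, 2))  # 30 s at 100 Hz, synthetic
print(sway_summary(cop))
```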
Procedia PDF Downloads 328901 Implication of Woman’s Status on Child Health in India
Authors: Rakesh Mishra
Abstract:
India's demography has long drawn worldwide attention because of its unprecedented outcomes amid multifaceted socioeconomic and geographical conditions. Although India was the first country to implement family planning, in 1952, it holds the world's second-largest population, and a single state such as Uttar Pradesh would rank as the world's fifth most populous entity, surpassing Brazil. Such a large population is especially prone to the demographic disparities persisting across its territories, brought about by inequalities in the availability, accessibility, and attainability of socioeconomic and other resources. The fifth Millennium Development Goal emphasizes improving maternal and child health across the world, as children's development is vital for the overall development of society and the best way to develop national human resources is to take care of children. The target is to reduce infant deaths by three-quarters between 1990 and 2015. Child health status depends on care and delivery by trained personnel, particularly through institutional facilities, which is in turn associated with the status of the mother. However, delivery in institutional facilities and delivery by skilled personnel are rising only slowly in India. The main objective of the present study is to measure child health status based on the educational and occupational background of women in India. The study indicates that women's education plays a crucial role in newborn care and access to family planning, but women's autonomy shows mixed results across the states of India. It is observed that rural women are 1.61 times more likely to exclusively breastfeed their children than urban women. Relative to the Hindu category, women belonging to other religious communities were 21 percent less likely to exclusively breastfeed their child. Taking scheduled caste as the reference category, the odds of exclusive breastfeeding decrease for the other castes, significantly so for the general category. Women of high educational status have higher odds of using family planning methods in most of the southern states of India. By and large, girls and boys are about equally undernourished. Undernutrition is generally lower for first births than for subsequent births and consistently increases with birth order for all measures of nutritional status. It is to be noted that at age 12-23 months, when many children are being weaned from breast milk, 30 percent of children are severely stunted and around 21 percent are severely underweight. This paper therefore presents evidence on the patterns of child health status in India and its states with reference to mothers' socioeconomic and biological characteristics, examines trends in these, and discusses plausible explanations.Keywords: immunization, exclusive breastfeeding, under five mortality, binary logistic regression, ordinal regression and life table
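Odds ratios such as the 1.61 reported above come from the binary logistic regressions named in the keywords. The sketch below (Python, statsmodels) illustrates the estimation on synthetic survey-like data; the data frame, column names, and covariates are hypothetical stand-ins, and the rural coefficient is seeded at ln 1.61 so the recovered odds ratio is recognisable.

```python
# Binary logistic regression recovering an odds ratio, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "rural": rng.integers(0, 2, n),              # 1 = rural residence (hypothetical)
    "mother_edu_years": rng.integers(0, 16, n),  # maternal schooling (hypothetical)
})
# Outcome whose log-odds rise by ln(1.61) with rural residence
logit = -0.5 + np.log(1.61) * df["rural"] + 0.03 * df["mother_edu_years"]
df["exclusive_bf"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("exclusive_bf ~ rural + mother_edu_years", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```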
Procedia PDF Downloads 265900 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed radio-frequency (RF) energy that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by one or more Schottky diodes connected in series or in shunt. In a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies, so low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also common in rectifier designs. Electronic circuit designs are usually analyzed through simulation in the SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for analyzing quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited: they cannot properly model microwave hybrid circuits containing both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the numerical Finite-Difference Time-Domain (FDTD) method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the current-density and electric-field equations within the FDTD method, together with its circuit relation to the voltage drop across the modeled component, for the lumped-parameter case, using the Lumped-Element FDTD (LE-FDTD) formulations proposed in the literature for passive components and for the diode. Next, a rectifier is built meeting the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
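To make the LE-FDTD idea concrete, here is a minimal one-dimensional sketch (Python) in which a lumped resistor is folded into the Ampere-Maxwell update at a single Yee cell, modeled as an effective conductivity over an assumed unit cross-section. Grid size, load value, and source are illustrative assumptions, not the authors' setup, and a diode would need the nonlinear LE-FDTD treatment the abstract cites.

```python
# 1-D LE-FDTD sketch: lumped resistor inside the Ampere-Maxwell update.
import numpy as np

c0, eps0, mu0 = 2.998e8, 8.854e-12, 4e-7 * np.pi
nz, dz = 200, 1e-3                       # 1-D grid, 1 mm cells
dt = 0.5 * dz / c0                       # Courant-stable time step
ez, hy = np.zeros(nz), np.zeros(nz - 1)

k_src, k_load = 20, 150
R = 50.0                                 # lumped load resistor (ohm), assumed
sigma = dz / R                           # resistor as effective conductivity (unit area assumed)
loss = 1.0 / (1.0 + sigma * dt / eps0)   # semi-implicit loss factor at the load cell

for n in range(1200):
    hy += dt / (mu0 * dz) * np.diff(ez)           # Faraday update
    ez[1:-1] += dt / (eps0 * dz) * np.diff(hy)    # Ampere-Maxwell update
    ez[k_load] *= loss                            # lumped element folded into the update
    ez[k_src] += np.exp(-((n - 60) / 15.0) ** 2)  # Gaussian soft source

print("field at the load cell:", ez[k_load])
```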
Procedia PDF Downloads 132899 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities for portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit based entirely on robotic arms, intended for research on cancer treatment with ion beams in small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed Tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen, allowing image acquisition by computed tomography. Irradion supports nearly all conventional 2D and 3D X-ray imaging trajectories with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast; they can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied to studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches, and we demonstrate in this publication various novel scan trajectories and their benefits. Proton Imaging and Particle Tracking: The Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding particle-tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation. In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small-animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed in the project is an entirely new product on the market: no preclinical research system currently combines precision robotic irradiation with photon/ion beams and multimodal high-resolution imaging. The technology therefore has the potential to deliver a significant leap beyond current first-generation devices.Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
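As a geometric illustration of what "arbitrary-path CT" buys over a fixed gantry, the sketch below (Python) generates paired source/detector poses along a non-planar saddle orbit around the isocentre, the kind of trajectory two independent robot arms can realise. The trajectory shape and distances are illustrative assumptions, not Irradion's actual scan paths.

```python
# Paired source/detector positions along a non-planar (saddle) CT orbit.
import numpy as np

def saddle_trajectory(n_views: int, radius_mm: float, tilt_mm: float):
    """Source positions on a saddle orbit; detector kept opposite the source
    through the isocentre at the origin."""
    t = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    src = np.stack([radius_mm * np.cos(t),
                    radius_mm * np.sin(t),
                    tilt_mm * np.cos(2.0 * t)], axis=1)  # z-wobble = non-planar path
    det = -1.2 * src  # detector opposite the source, 20% farther from isocentre
    return src, det

src, det = saddle_trajectory(n_views=360, radius_mm=300.0, tilt_mm=40.0)
print(src.shape, det.shape)  # each robot arm would be commanded to these poses
```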
Procedia PDF Downloads 90898 Multimodal Analysis of News Magazines' Front-Page Portrayals of the US, Germany, China, and Russia
Authors: Alena Radina
Abstract:
On the global stage, national image is shaped by historical memory of wars and alliances, by government ideology, and particularly by media stereotypes that represent countries in positive or negative ways, and news magazine covers are a key site for national representation. The object of analysis in this paper is the portrayals of the US, Germany, China, and Russia on the front pages and in the cover stories of “Time”, “Der Spiegel”, “Beijing Review”, and “Expert”. Political comedy helps people learn about current affairs even when politics is not their area of interest, and satire thus indirectly sets the public agenda. Coupled with satirical messages, cover images and the linguistic messages embedded in covers become persuasive visual and verbal factors, known to drive about 80% of magazine sales. Preliminary analysis identified satirical elements in the magazine covers, which are known to influence and frame understandings and to attract younger audiences. Multimodal and transnational comparative framing analyses lay the groundwork for investigating why journalists, editors, and designers deploy certain frames rather than others. This research investigates to what degree the frames used on covers correlate with the frames within the cover stories, and what these framings can tell us about media professionals' representations of their own and other nations. The study sample comprises 32 covers: two covers representing each of the four chosen countries in each of the four magazines. The sampling framework covers two time periods, so as to compare each country's representation under two different presidents, and between men and women where present. The countries selected for analysis represent each category of the international news flows model: the core nations are the US and Germany; China is a semi-peripheral country; and Russia is peripheral. Examining the textual and visual design elements on the covers, and the images in the cover stories, reveals not only what editors believe visually attracts the reader's attention to the magazine but also how the magazines frame and construct national images and national leaders. The cover is the most powerful editorial and design page in a magazine, because images incorporate less intrusive framing tools; covers thus require less cognitive effort from audiences, who may therefore be more likely to accept the visual frame without question. Analysis of the design and linguistic elements of magazine covers helps us understand how media outlets shape their audiences' perceptions and how magazines frame global issues. While previous multimodal research on covers has focused mostly on lifestyle magazines or newspapers, this paper examines the power of current affairs magazines' covers to shape audience perception of national image.Keywords: framing analysis, magazine covers, multimodality, national image, satire
Procedia PDF Downloads 102897 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation
Authors: Joanna Gerszon, Aleksandra Rodacka
Abstract:
In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The purpose of this research was therefore to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed; in parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggests that these effects are due to piceatannol's ability to form a covalent bond with the nucleophilic cysteine residue (Cys149), which is directly involved in the catalytic reaction. Molecular docking also showed that up to 11 ligand molecules can be bound to the dehydrogenase simultaneously. Taking these data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence, as well as microscopy methods - transmission electron microscopy and Congo Red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation. Finally, studies in a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents the nuclear translocation of GAPDH induced by excessive oxidative stress in hippocampal cells and, in consequence, counteracts cell apoptosis. These studies demonstrate that, by binding GAPDH, piceatannol blocks the cysteine residue and counteracts the oxidative modifications that induce oligomerization and GAPDH aggregation, and that it protects hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation
Procedia PDF Downloads 167896 Ecosystem Modeling along the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty, R. Gayathri, V. Ranga Rao
Abstract:
Modeling the coupled physical and biogeochemical processes of coastal waters is vital for identifying primary production status under different natural and anthropogenic conditions. India's roughly 7,500 km of coastline contains numerous semi-enclosed coastal water bodies, such as estuaries, inlets, bays, and lagoons, as well as nearshore and offshore shelf waters, and is rich in a wide variety of ecosystem flora and fauna. Extensive domestic and industrial sewage enters these coastal water bodies directly or indirectly, affecting ecosystem character and creating environmental problems such as water-quality degradation, hypoxia, anoxia, and harmful algal blooms, which lead to declines in fisheries and other related biological production. The present study focuses on the southeast coast of India, from Pulicat to the Gulf of Mannar, a region rich in marine diversity with lagoon, mangrove, and coral ecosystems. The three-dimensional Massachusetts Institute of Technology general circulation model (MITgcm), together with the Darwin biogeochemical module, is configured for the western Bay of Bengal (BoB) to study the biogeochemistry of this region. The biogeochemical module resolves the cycling of carbon, phosphorus, nitrogen, silica, iron, and oxygen through inorganic, living, dissolved, and particulate organic phases. The model domain extends from 4°N-16.5°N and 77°E-86°E with a horizontal resolution of 1 km. The bathymetry is derived from the General Bathymetric Chart of the Oceans (GEBCO), which has a resolution of 30 arc-seconds. The model is initialized with temperature and salinity fields from the World Ocean Atlas (WOA2013) of the National Oceanographic Data Center, at a resolution of 0.25°, and is forced by surface wind stress from ASCAT and photosynthetically active radiation from the MODIS-Aqua satellite. A seasonal climatology of nutrients (phosphate, nitrate, and silicate) for the southwest BoB region is prepared using available National Institute of Oceanography (NIO) in-situ data sets and compared with the WOA2013 seasonal climatology. Model simulations with the two different initial conditions, viz. WOA2013 and the generated NIO climatology, show evident differences in the concentrations and evolution of nutrients in the study region; nutrient availability is higher in the NIO data than in WOA across the model domain. Model-simulated primary productivity is compared with spatially distributed satellite-derived chlorophyll data and, at various locations, with in-situ data, and its seasonal variability is also studied.Keywords: Bay of Bengal, Massachusetts Institute of Technology general circulation model, MITgcm, biogeochemistry, primary productivity
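One concrete preprocessing step in such a configuration is regridding the 0.25° WOA climatology onto the 1-km model grid and writing it in the raw big-endian binary layout MITgcm reads as initial conditions. The sketch below (Python, xarray) illustrates this with a synthetic stand-in for the WOA fields; the variable names t_an/s_an follow WOA conventions, but the data values and file names are hypothetical.

```python
# Regrid a coarse T/S climatology onto a ~1 km model grid for MITgcm input.
import numpy as np
import xarray as xr

# Synthetic stand-in for the WOA2013 0.25-degree climatology
src_lat = np.arange(0.0, 25.0, 0.25)
src_lon = np.arange(70.0, 95.0, 0.25)
woa = xr.Dataset(
    {
        "t_an": (("lat", "lon"), 28.0 - 0.1 * np.add.outer(src_lat, 0 * src_lon)),
        "s_an": (("lat", "lon"), 33.0 + 0.02 * np.add.outer(0 * src_lat, src_lon)),
    },
    coords={"lat": src_lat, "lon": src_lon},
)

# Target model grid: 4N-16.5N, 77E-86E at ~1 km (~0.009 degrees)
lat = np.arange(4.0, 16.5, 0.009)
lon = np.arange(77.0, 86.0, 0.009)
init = woa.interp(lat=lat, lon=lon, method="linear")

# MITgcm reads raw big-endian float32 binaries as initial conditions
init["t_an"].values.astype(">f4").tofile("T_init.bin")
init["s_an"].values.astype(">f4").tofile("S_init.bin")
```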
Procedia PDF Downloads 141895 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors
Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy
Abstract:
The measurement of electrolytes has high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin-film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of commercially successful examples can be found. This can be attributed to the high cost involved in their production and to the sustained degradation of the electrodes over time, which causes signal drift in the measurements. Thus, there is an unmet need for low-cost, robust sensors for the real-time monitoring of analyte concentrations in patients, to allow the early detection and diagnosis of disease. This paper reports a thin-film ion-selective sensor for the evaluation of potassium ions in aqueous samples. Aerosol-assisted chemical vapour deposition (AACVD) was applied as the fabrication method because of its cost-effectiveness and fine control over film deposition. The technique does not require vacuum and is suitable for coating large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures onto 50-nm-thick gold layers. The degradation processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including polymer leaching, mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose was then carried out to improve the long-term stability of the sensors, showing an improvement in performance after 5 weeks. The antifouling properties of this coating were assessed using a state-of-the-art quartz crystal microbalance sensor, allowing quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, their surface energy, and biomolecule adhesion was then found and used to optimize the protective film.Keywords: hyperkalemia, drift, AACVD, organosilicon
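Nanogram-range protein quantification with a quartz crystal microbalance usually rests on the Sauerbrey relation, which converts a resonance-frequency shift into adsorbed mass for a thin rigid film. The sketch below (Python) applies it; the 5 MHz fundamental, the sensitivity constant of 17.7 ng/(cm²·Hz), and the electrode area are typical assumed values, not the parameters of the instrument used in the study.

```python
# Sauerbrey estimate: adsorbed mass from a QCM frequency shift (rigid film).
def sauerbrey_mass_ng(delta_f_hz: float, n_overtone: int = 1,
                      area_cm2: float = 0.79) -> float:
    """Adsorbed mass in ng: dm = -C * (df/n) * A, with C = 17.7 ng/(cm^2*Hz)
    for a 5 MHz AT-cut quartz crystal (assumed sensor)."""
    c_sens = 17.7  # ng cm^-2 Hz^-1 at the 5 MHz fundamental
    return -c_sens * (delta_f_hz / n_overtone) * area_cm2

# A -8.5 Hz shift on the 3rd overtone maps to roughly 40 ng of protein
print(f"{sauerbrey_mass_ng(delta_f_hz=-8.5, n_overtone=3):.1f} ng")
```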
Procedia PDF Downloads 123894 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia
Authors: Merih Welay Welesilassie
Abstract:
Language is a potent instrument that not only serves the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape in which language extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than a food embargo. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture, particularly following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent imposition of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the conceptual framework of linguistic genocide as an analytical tool to gain deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling; document analysis was additionally performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and by the desire to promote a single language and culture policy. This process, often referred to as 'Amharanization,' aimed to homogenize the culture and language of the society. The Amhara authorities enacted several measures in pursuit of their objectives, including outlawing the Tigrigna language, punishing those who speak Tigrigna, imposing the Amhara language and culture, mandating relocation, and even committing heinous acts that have inflicted immense physical and emotional suffering on members of the Tigrayan community. A comprehensive analysis of the contextual factors, actions, intentions, and consequences suggests that instances of linguistic genocide may be taking place in the western Tigray region. The present study sheds light on the severe consequences that can arise from implementing monolingual and monocultural policies in multilingual areas. By thoroughly scrutinizing the implications of such policies, the study provides recommendations and directions for future research in this critical area.Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray
Procedia PDF Downloads 65893 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship, officers look for visual aids to navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, and buoys, among others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical-navigation-related mobile AR applications have been limited to the leisure industry; if proven viable, this prototype could facilitate the creation of similar applications to help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port; this overloaded the display and required over 45 seconds to load the data, so extra filters for the navigational aids are being considered to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Magnetic interference from the ship's bridge also generated a persistent compass error in the AR display that varied between 5 and 12 degrees; since the deviation was consistent over the whole testing duration, the team is now looking at allowing users to calibrate the compass manually. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is expected to be needed, and designers will need to apply a user-centered design approach to create better interfaces and display technologies for enhanced navigation aids.Keywords: compass error, GPS, maritime navigation, mobile augmented reality
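The two core quantities the app computes for each charted light - great-circle distance and initial bearing from the ship's GPS fix - follow from standard geodesy on a spherical Earth. The sketch below (Python) shows the haversine distance and forward-azimuth formulas; the ship and buoy coordinates are illustrative.

```python
# Great-circle distance and initial bearing from a GPS fix to a charted aid.
import math

def distance_bearing(lat1, lon1, lat2, lon2):
    """Haversine distance (m) and initial bearing (degrees true) between two fixes."""
    R = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    brg = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, brg

d, b = distance_bearing(50.909, -1.404, 50.891, -1.389)  # ship -> buoy (example fixes)
print(f"{d:.0f} m at {b:.1f} deg true")
```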
Procedia PDF Downloads 330892 Association of Ovine Lymphocyte Antigen (OLA) with the Parasitic Infestation in Kashmiri Sheep Breeds
Authors: S. A. Bhat, Ahmad Arif, Muneeb U. Rehman, Manzoor R Mir, S. Bilal, Ishraq Hussain, H. M Khan, S. Shanaz, M. I Mir, Sabhiya Majid
Abstract:
Background: The climatic conditions of the state range from subtropical (Jammu) and temperate (Kashmir) to cold arctic (Ladakh) zones, which exert a significant influence on its agro-climatic conditions. Gastrointestinal parasitism is a major problem in sheep production worldwide. Materials and Methods: The present study evaluated the resistance status of sheep breeds reared in the Kashmir Valley for natural resistance against Haemonchus contortus by natural pasture challenge infection. Ten microsatellite markers were used to evaluate the association of the Ovar-MHC with parasitic resistance, alongside biochemical and parasitological parameters. Following deworming, 500 animals were exposed to selected contaminated pastures in the vicinity of the livestock farms of SKUAST-K and Sheep Husbandry Kashmir. From each animal, about 10-15 ml of blood was collected aseptically for molecular and biochemical analysis. Weekly fecal samples (3 g) were taken directly from the rectum of all experimental animals and examined for fecal egg count (FEC) using the modified McMaster technique. Packed cell volume (PCV) was determined within 2-5 h of blood collection, and all biochemical parameters were determined in serum with a semi-automated analyzer. DNA was extracted from all blood samples by the phenol-chloroform method, and microsatellite analysis was done by denaturing sequencing gel electrophoresis. Results: Overall, sheep of the Bakerwal breed, followed by the Corriedale breed, performed relatively better in the trial; however, differences between breeds remained small. Both significant (P<0.05) and non-significant differences with respect to resistance against haemonchosis were noted at different intervals in all the parameters. All animals were typed for the microsatellites INRA132, OarCP73, DRB1 (U0022), OLA-DQA2, BM1818, TFAP2A, HH56, BM1815, IL-3, and BM-1258. An association study was performed including the effects of FEC, PCV, TSP, SA, LW, and the number of alleles within each marker. The microsatellite markers showed degrees of heterozygosity of 0.72, 0.72, 0.75, 0.62, 0.84, 0.69, 0.66, 0.65, 0.73, and 0.68, respectively. Significant associations between alleles and the measured parameters were found only for the OarCP73, OLA-DQA2, and BM1815 microsatellite markers; standard alleles of these markers showed significant effects on TP, SA, and body weight. The three sheep breeds included in the study responded differently to the nematode infection, which may be attributed to differences in their natural resistance against nematodes. Conclusion: Our data confirm that some markers (OarCP73, OLA-DQA2, and BM1815) within the Ovar-MHC are associated with phenotypic parameters of resistance and suggest the superiority of the Bakerwal breed in natural resistance against Haemonchus contortus.Keywords: Ovar-Mhc, ovine leukocyte antigen (OLA), sheep, parasitic resistance, Haemonchus contortus, phenotypic & genotypic markers
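The heterozygosity figures quoted per marker are, in their simplest observed form, the fraction of typed animals carrying two different alleles at that locus. A minimal sketch (Python) follows; the genotypes are hypothetical.

```python
# Observed heterozygosity at one microsatellite locus from diploid genotypes.
from typing import Sequence, Tuple

def observed_heterozygosity(genotypes: Sequence[Tuple[str, str]]) -> float:
    """Fraction of typed individuals with two different alleles at the locus."""
    typed = [g for g in genotypes if None not in g]
    return sum(a != b for a, b in typed) / len(typed)

# Hypothetical allele sizes (bp) for four animals at one marker
inra132 = [("146", "152"), ("146", "146"), ("150", "152"), ("148", "152")]
print(round(observed_heterozygosity(inra132), 2))  # 0.75
```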
Procedia PDF Downloads 714891 Experience of Two Major Research Centers in the Diagnosis of Cardiac Amyloidosis from Transthyretin
Authors: Ioannis Panagiotopoulos, Aristidis Anastasakis, Konstantinos Toutouzas, Ioannis Iakovou, Charalampos Vlachopoulos, Vasilis Voudris, Georgios Tziomalos, Konstantinos Tsioufis, Efstathios Kastritis, Alexandros Briassoulis, Kimon Stamatelopoulos, Alexios Antonopoulos, Paraskevi Exadaktylou, Evanthia Giannoula, Anastasia Katinioti, Maria Kalantzi, Evangelos Leontiadis, Eftychia Smparouni, Ioannis Malakos, Nikolaos Aravanis, Argyrios Doumas, Maria Koutelou
Abstract:
Introduction: Cardiac amyloidosis from transthyretin (ATTR-CA) is an infiltrative disease characterized by the deposition of pathological transthyretin complexes in the myocardium. This study describes the characteristics of patients diagnosed with ATTR-CA from 2019 to the present at the Nuclear Medicine Department of the Onassis Cardiac Surgery Center and at AHEPA Hospital, two centers with extensive experience in amyloidosis and modern technological equipment for its diagnosis. Materials and Methods: Records of consecutive patients (N=73) diagnosed with any type of amyloidosis were collected and analyzed, and the patients were prospectively followed. The diagnosis of amyloidosis was made using specific myocardial scintigraphy with Tc-99m DPD. Demographic characteristics, including age, gender, marital status, height, and weight, were collected in a database, as were clinical characteristics such as amyloidosis type (ATTR and AL), serum biomarkers (BNP, troponin), electrocardiographic findings, ultrasound findings, NYHA class, aortic valve replacement, device implants, and medication history. Some of the most significant results are presented. Results: A total of 73 cases (86% male) were diagnosed with amyloidosis over four years. The mean age at diagnosis was 82 years, and the main symptom was dyspnea. Most patients suffered from ATTR-CA (65 vs. 8 with AL). Of the ATTR-CA patients, 61 were diagnosed with the wild-type disease and 2 with two rare mutations. Twenty-eight patients had systemic amyloidosis with extracardiac involvement, and 32 had a history of bilateral carpal tunnel syndrome. Four patients had already developed polyneuropathy, with the diagnosis confirmed by DPD scintigraphy, which is known for its high sensitivity. Among patients with isolated cardiac involvement, only 6 had a left ventricular ejection fraction below 40%. The majority of ATTR patients began tafamidis treatment immediately after diagnosis. Conclusion: The experiences shared by the two centers and their continuous exchange of information provide valuable insights into the diagnosis and management of cardiac amyloidosis. Clinical suspicion of amyloidosis and an early diagnostic approach are crucial, given the availability of non-invasive techniques: cardiac scintigraphy with DPD can confirm the presence of the disease without the need for a biopsy. The ultimate goal remains the continuous education and awareness of clinical cardiologists, so that this systemic and treatable disease can be diagnosed and certified promptly and treatment can begin as soon as possible.Keywords: amyloidosis, diagnosis, myocardial scintigraphy, Tc-99m DPD, transthyretin
Procedia PDF Downloads 91890 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly from the animal: they are obtained without handling it (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping errors. For some years, these errors delayed the widespread use of non-invasive genetic information. Genotyping errors can be limited by using analysis methods that accommodate the errors and peculiarities of non-invasive samples; genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with such errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. Comparing methods on datasets differing in size and structure is useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity between these methods' likelihood-based pairwise and clustering algorithms. The ETLM matches showed almost no similarity with the genotypes matched by the other methods: ETLM's different clustering system and error model seem to lead to a more discriminating selection, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently across the datasets, with consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity, heterogeneity here meaning different capture rates between individuals. In these examples, homogeneity appears to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance of processing time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better in a wide range of situations.Keywords: algorithms, genetics, matching, population
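The error tolerance at the heart of these matching programs can be illustrated with a toy rule: declare two multilocus genotypes the same individual if they disagree at no more than a set number of loci, so that allelic dropout in degraded samples does not split one animal into two. The sketch below (Python) is a deliberately simplified stand-in, not the likelihood machinery of Cervus, Colony, or ETLM; the loci, alleles, and threshold are hypothetical.

```python
# Toy error-tolerant genotype matching across non-invasive samples.
from itertools import combinations

def same_individual(g1, g2, max_mismatch=1):
    """g1, g2: dicts locus -> frozenset of alleles; tolerate limited mismatches."""
    shared = set(g1) & set(g2)
    mismatches = sum(g1[locus] != g2[locus] for locus in shared)
    return mismatches <= max_mismatch

samples = {
    "scat_01": {"L1": frozenset({150, 154}), "L2": frozenset({200, 204})},
    "scat_02": {"L1": frozenset({150, 154}), "L2": frozenset({200})},  # dropout at L2?
    "scat_03": {"L1": frozenset({158, 162}), "L2": frozenset({210, 212})},
}
for a, b in combinations(samples, 2):
    if same_individual(samples[a], samples[b]):
        print(a, "and", b, "are likely the same individual")
```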
Procedia PDF Downloads 143