Search results for: change making
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11710

640 Health Literacy: Collaboration between Clinician and Patient

Authors: Cathy Basterfield

Abstract:

Issue: For individuals to engage in their own health care, health professionals need to be aware of each person's specific communication skills and abilities. One of the most discussed of these is health literacy, yet a high level of health literacy is routinely assumed of all adults. Background: A review of publicly available health content shows that it assumes all adult readers can read at a high level of literacy, often at a post-school education level. Health information writers and clinicians need to recognise this as one critical reason why a person's behaviour may show little or no change, or why appointments become no-shows: perhaps unintentionally, they are miscommunicating with the majority of the adult population. Health information spans many literacy domains. It usually includes technical medical terms or jargon. Many fact sheets and other materials require scientific literacy, with or without specific numerical literacy: they may include graphs, percentages, timings, distances, or weights. Each additional word or concept from these domains decreases readers' ability to meaningfully read, understand and act on the information. Long or unfamiliar words in a heading reduce readers' motivation even to attempt the text, and, critically, people with low literacy are overwhelmed when pages are covered with dense text. People attending a health environment may also be unwell or anxious about a diagnosis, which makes it harder still to read, understand and act on information. Access to health information must therefore consider an even wider range of adults, including those with poor school attainment, migrants, refugees, homeless people, people with mental illness, and people who are ageing.
People with low literacy may also include people with lifelong disabilities, people with acquired disabilities, people who read English as a second (or third) language, people who are Deaf, and people who are vision impaired. Outcome: This paper discusses Easy English, a style of writing developed for adults. It uses the audience's everyday words, short sentences, short words, and no jargon, together with concrete language and concrete, specific images that support the text. It has been developed in Australia since the mid-2000s. The paper showcases projects in the health domain that use Easy English to improve the understanding and functional use of written information for the many adults in our communities who do not have the health literacy to manage a range of day-to-day reading tasks, with examples drawn from consent forms, fact sheets, choice options, instructions, and other functional documents. It asks practitioners to reflect on their own work practice and consider which written information must be available in Easy English. It does not matter how cutting-edge a new treatment is; when adults cannot read or understand what it is about, or its positive and negative outcomes, they are less likely to be engaged in their own health journey.

Keywords: health literacy, inclusion, Easy English, communication

Procedia PDF Downloads 125
639 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregate process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. In addition, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. The framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence and, finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches for waste classification are made. Object detection focusses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This prepares for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them against an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, feature-matching objects may require expert input. The third stage, radiological mapping, characterizes the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to produce the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. The framework utilises recent advancements in detecting and mapping complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective and safer.
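As an illustration of the fusion and classification stages (stages 4 and 5), the sketch below attaches voxel-wise activity measurements to detected objects and converts each object's total to specific activity in Bq/kg. The voxel grid, object masses, and the two-band threshold are hypothetical placeholders, not values from the case study.

```python
# Hypothetical sketch of stages 4-5 (data fusion and classification): attach
# voxel-wise activity measurements to detected objects, then express each
# object's burden as specific activity in Bq/kg. Grid, masses, and the
# two-band threshold are illustrative, not taken from the case-study data.

def fuse_maps(object_ids, activity_bq, mass_kg):
    """object_ids and activity_bq are flat, voxel-aligned lists; -1 marks
    empty voxels. Returns specific activity (Bq/kg) per detected object."""
    totals = {}
    for obj, act in zip(object_ids, activity_bq):
        if obj >= 0:
            totals[obj] = totals.get(obj, 0.0) + act
    return {obj: totals.get(obj, 0.0) / mass_kg[obj] for obj in mass_kg}

def classify(bq_per_kg, llw_limit=4.0e6):
    """Toy two-band decision: low-level vs intermediate-level waste.
    The 4e6 Bq/kg cut-off is a placeholder, not a regulatory value."""
    return "LLW" if bq_per_kg < llw_limit else "ILW"

# Four voxels: object 0 (a pipe) occupies two, object 1 one, one is empty.
object_ids = [0, 0, 1, -1]               # from stages 1-2 (detection)
activity   = [1.0e6, 2.0e6, 9.0e6, 0.0]  # Bq per voxel, from stage 3
masses     = {0: 12.5, 1: 1.5}           # kg, from stage-2 characterisation

specific = fuse_maps(object_ids, activity, masses)
labels = {obj: classify(v) for obj, v in specific.items()}
```

In a real cell the voxel grid would come from the registered point cloud and radiation map, and the decision bands from the applicable waste acceptance criteria.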

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 200
638 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. This study aimed to identify the most influential delay factors in construction projects in Iraq, which affect time, cost, and quality, and to find the best solutions for addressing delays by setting parameters that restore balance. Thirty projects in different areas of construction were selected as the sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq, with Baghdad as a case study. A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Participants from the case projects provided data through a survey instrument. A mixed-methods approach was applied. Mathematical data analysis was used to construct models that predict the delay in time and cost of projects before they start, with artificial neural network (ANN) analysis selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to delays before they cause any inefficiency in a project under implementation, and to address obstacles thoroughly so as to develop this industry in Iraq. The approach was applied using the data collected through the survey and questionnaire.
Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries and regions, but some, such as security issues and low-price bid selection, are unique to the Iraqi project sample. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and have never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be applied to complicated problems for which no interpretation or solution exists. In some cases statistical analysis was conducted, and where the problem did not follow a linear equation or the correlation was weak, we suggested using ANNs, since they are suited to nonlinear problems and can find the relationship between input and output data, which proved genuinely supportive.
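To make the modelling approach concrete, here is a minimal sketch of the kind of feed-forward ANN used to map delay-factor scores to predicted overruns. The factor scores, weights, and one-hidden-layer architecture are illustrative assumptions, not the model fitted to the Iraqi project data.

```python
# Minimal feed-forward ANN sketch: delay-factor scores -> predicted overruns.
# All weights and inputs below are invented for illustration only.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, bias):
    """One fully connected layer: y_j = sum_i x_i * w[i][j] + b_j."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + b
            for col, b in zip(zip(*weights), bias)]

def predict_overrun(x):
    """Factor scores in [0, 1] -> (time overrun %, cost overrun %)."""
    hidden = relu(dense(x, W1, B1))
    return dense(hidden, W2, B2)

# Hypothetical inputs: contractor-failure, design-change, security-risk scores.
W1 = [[0.6, 0.2], [0.4, 0.7], [0.9, 0.3]]   # 3 inputs -> 2 hidden units
B1 = [0.0, 0.1]
W2 = [[30.0, 20.0], [25.0, 35.0]]           # 2 hidden units -> 2 outputs
B2 = [5.0, 5.0]

time_pct, cost_pct = predict_overrun([0.8, 0.5, 0.9])
```

In the study itself, the weights would be learned from the survey data for the thirty Baghdad projects rather than set by hand; the nonlinearity in the hidden layer is what lets the model capture the non-linear factor-overrun relationships the authors describe.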

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 165
637 Emotions Aroused by Children’s Literature

Authors: Catarina Maria Neto da Cruz, Ana Maria Reis d'Azevedo Breda

Abstract:

Emotions are manifestations of everything that happens around us and consequently influence our actions. People experience emotions continuously: when socializing with friends, when facing complex situations, and when at school, among many other situations. Although the influence of emotions on the teaching and learning process is nothing new, their study in the academic field has become more popular in recent years, distinguishing between positive emotions (e.g., enjoyment and curiosity) and negative emotions (e.g., boredom and frustration). There is no doubt that emotions play an important role in students' learning, since the development of knowledge involves thoughts, actions, and emotions. Nowadays, one of the most significant changes in how we acquire knowledge, access information, and communicate is that we do so through technological and digital resources. Faced with the increasingly frequent use of technological or digital means for different purposes, whether in acquiring knowledge or in communicating with others, the emotions involved in these processes naturally change. The speed with which the Internet provides information reduces the excitement of searching for an answer, the gratification of discovering something through one's own effort, patience, the capacity for effort, and resilience. Thus, technological and digital devices are bringing changes to the emotional domain. For this reason, among others, it is essential to educate children from an early age to understand that not everything can be had with a single click, and to deal with negative emotions. Currently, many curriculum guidelines highlight the importance of developing so-called soft skills, in which the emotional domain is present, in academic contexts. The technical report “OECD Survey on Social and Emotional Skills”, developed by the OECD, is one of them.
Within the scope of the Portuguese reality, the “Students' Profile by the End of Compulsory Schooling” and the “Health Education Reference” also emphasize the importance of emotions in education. There are several resources for stimulating good emotions in articulation with cognitive development. One of the most readily available, yet little used, resources across the most diverse areas of knowledge after pre-school education is literature. Through its characteristics, in the narrative or in the illustrations, literature offers the reader a journey full of emotions. On the other hand, literature makes it possible to build bridges between narrative and different areas of knowledge, reconciling the cognitive and emotional domains. This study results from the presentation session of a children's book, entitled “From the Outside to Inside and from the Inside to Outside”, to children attending the 2nd, 3rd, and 4th years of basic education in the Portuguese education system. In this book, reason and emotion are in constant dialogue, so in this session, based on excerpts from the book dramatized by the authors, questions were asked of the children in a large group, with the aim of exploring their perception of certain emotions and the events that trigger them. In line with this aim, qualitative, descriptive, and interpretative research was carried out based on participant observation and audio recordings.

Keywords: emotions, basic education, children, soft skills

Procedia PDF Downloads 84
636 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System

Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski

Abstract:

Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted along them that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which acts as the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied from the ordinary single-phase mains (230 V, 50 Hz); this is then converted to 24 V direct current in modules serving the individual electrodes, and this current feeds the electrodes directly. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current also means that the energy consumption per electrode is extremely low, only 35 W, compared to other methods of disintegration. The pipes carrying the electrodes, of diameter DN150, are made of acid-proof steel and connected at both ends with 90° elbows terminated with flanges. The available S and U types of pipes enable very convenient fitting into existing installations and rooms, or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, or at an angle, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, the type of biomass, the content of dry matter, the method of disintegration (single-pass or circulatory), the mounting site, etc.
The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). The destruction of biomass structure in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, but one highly significant for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration and may yield energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the material to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution within the specialist equipment on offer for the processing of plant biomass, including Virginia fanpetals, before methane fermentation.
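Using the figures quoted above (35 W per electrode and 20-30% agitator savings), a back-of-the-envelope energy balance can be sketched; the electrode count, run time, and agitator baseline below are hypothetical, chosen only to show the arithmetic.

```python
# Back-of-the-envelope daily energy balance for an electrokinetic
# disintegration line, using the 35 W/electrode figure from the text.
# Electrode count, run time, and the agitator baseline are assumptions.

def disintegration_kwh(n_electrodes, hours, watts_per_electrode=35.0):
    """Daily electricity drawn by the electrodes, in kWh."""
    return n_electrodes * watts_per_electrode * hours / 1000.0

def agitator_saving_kwh(baseline_kwh, saving_fraction):
    """Saving from lower post-disintegration viscosity (20-30% per the text)."""
    return baseline_kwh * saving_fraction

cost = disintegration_kwh(n_electrodes=10, hours=24)          # 8.4 kWh/day
save_low = agitator_saving_kwh(baseline_kwh=120.0, saving_fraction=0.20)
save_high = agitator_saving_kwh(baseline_kwh=120.0, saving_fraction=0.30)
net_low, net_high = save_low - cost, save_high - cost
```

Even under these toy assumptions, the agitator saving comfortably exceeds the electrodes' own consumption, which is the point the abstract makes about the energy balance.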

Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals

Procedia PDF Downloads 377
635 The Invisibility of Production: A Comparative Study of the Marker of Modern Urban-Centric Economic Development

Authors: Arpita Banerjee

Abstract:

We now live in a world where half of the human population are city dwellers, and the migration of people from rural to urban areas is rising continuously. But the promise of higher wages and a better quality of life cannot keep up with the pace of migration. The rate of urbanization is much higher in developing countries: the UN predicts that 95 percent of urban expansion over the next few decades will take place in the developing world. The population in the urban settlements of developing nations is soaring, and megacities like Mumbai, Dhaka, Jakarta, Karachi, Manila, Shanghai, Rio de Janeiro, Lima, and Kinshasa are crammed with people, a majority of whom are migrants. Rural-urban migration has also taken a new shape with the rising number of smaller cities. Apart from the increase in non-agricultural economic activities, growing demand for resources and energy, the increase in waste and pollution, and a greater ecological footprint, there is another significant characteristic of the current wave of urbanization, and this paper analyses that important marker: the invisibility of production sites. The growing urban space ensures that the producers, the production sites, and the production process stay beyond urban visibility. In cities and towns, living largely means earning money, either in the informal service and small-scale manufacturing sectors (a major part of which is food preparation) or in the formal service sector. In both cases, commodity creation cannot be seen. The urban space becomes the marketplace, where nature and its services, along with non-urban labour, remain invisible unless sold in the market. Hence, consumers are becoming increasingly disengaged from producers. This paper compares the rate of increase in the size of, and employment in, the informal and/or formal sectors of selected urban areas of India.
A comparison of these characteristics over the years is also presented, in order to establish how the anonymity of producers to urban consumers has grown as urbanization has risen. The paper further analyses the change in the cost of transporting goods into the cities and towns of India, and supports the claim made here that the invisibility of production is a crucial marker of modern-day urban-centric economic development. Such urbanization has an important ecological impact: the invisibility of the production site spares the urban consumer society from dealing with the ethical and ecological aspects of the production process. Once real-sector production is driven out of cities and towns, the invisible ethical and ecological impacts of growing urban consumption free consumers from any sense of responsibility towards those impacts.

Keywords: ecological impact of urbanization, informal sector, invisibility of production, urbanization

Procedia PDF Downloads 130
634 “Uninformed” Religious Orientation Can Lead to Violence in Any Given Community: The Case of African Independence Churches in South Africa

Authors: Ngwako Daniel Sebola

Abstract:

Introductory Statement: Religions are necessary, as they offer and teach something to their adherents. People in one religion may not have a complete understanding of the Supreme Being (Deity) of a religion other than their own. South Africa, like other countries in the world, hosts various religions, including Christianity. Almost 80% of the South African population adheres to the Christian faith, though in different denominations and sects, and each church fulfils spiritual needs that perhaps others cannot. The African Independent Churches are one of the denominational families in the country. These churches arose as a protest against Western forms and expressions of Christianity; their major concern was to develop an indigenous expression of Christianity. The relevance of the African Independent Churches includes addressing the needs of the people holistically. Controlling disease has been an important aspect of this in different historical periods: through healing services, leaders of African churches are able to attract many followers, and the healing power associated with the founders of many African Initiated Churches leads people to follow and respect them as true leaders within many African communities. Despite their strong points, the African Independent Churches, like many others, face a variety of challenges, especially conflicts. Ironically, destructive conflicts have resulted in violence. Such violence demonstrates a lack of informed religious orientation among those concerned. This paper investigates and analyses the causes of conflict and violence in the African Independent Churches. The researcher used the Shembe and International Pentecostal Holiness Churches in South Africa as a point of departure. As a solution to curb violence, the researcher suggests useful strategies for handling conflicts. Methodology: Comparative and qualitative approaches were used as methods of collecting data in this research.
The intention is to analyse the similarities and differences of violence among members of the Shembe and International Pentecostal Holiness Churches. Equally important, the researcher aims to obtain data through interviews, questionnaires, and focus groups, among other means, and to interview fifteen individuals from both churches. Finding: Leadership squabbles and power struggles appear to be the main contributing factors to violence in many Independent Churches. Tragically, violence has resulted in the loss of life and the destruction of property, as in the case of the Shembe and International Pentecostal Holiness Churches. Violence is an indication that congregations and some leaders have not been properly equipped to deal with conflict. Concluding Statement: Conflict is a common part of human existence in any given community. The concern is when such conflict becomes contagious and leads to violence. There is a need to understand conflict consciously and objectively in order to devise appropriate measures for handling it. Conflict management calls for emotional maturity, self-control, empathy, patience, tolerance, and informed religious orientation.

Keywords: African, church, religion, violence

Procedia PDF Downloads 116
633 Influence of Gamma-Radiation Dosimetric Characteristics on the Stability of the Persistent Organic Pollutants

Authors: Tatiana V. Melnikova, Lyudmila P. Polyakova, Alla A. Oudalova

Abstract:

As a result of environmental pollution, agricultural products and foodstuffs inevitably contain residual amounts of Persistent Organic Pollutants (POPs). Special attention must be given to organic pollutants, including the various organochlorinated pesticides (OCPs). Among the priority OCPs are DDT (and its metabolite DDE), alpha-HCH, and gamma-HCH (lindane). These substances are controlled according to the requirements of sanitary norms and rules. At the same time, it is often overlooked that the primary product may undergo technological processing (in particular, irradiation treatment), as a result of which the initial polluting substances may be transformed into other physicochemical forms. The goal of the present work was to study OCP radiation degradation under various gamma-radiation dosimetric characteristics. The problems posed for achieving this goal were: to evaluate the content of the priority OCPs in food, and to study the character of OCP degradation in model solutions (with trace concentrations commensurate with their real content in agricultural and food products) depending on the dosimetric characteristics of the gamma radiation. Qualitative and quantitative analysis of OCPs in food and model solutions was carried out with a Varian 3400 gas chromatograph (Varian, Inc., USA) and a Varian Saturn 4D chromatography-mass spectrometer (Varian, Inc., USA). Solutions of DDT, DDE, and the alpha- and gamma-isomers of HCH (0.01, 0.1, 1 ppm) were irradiated on the "Issledovatel" (60Co) and "Luch-1" (60Co) installations at a dose of 10 kGy, with the dose rate varied from 0.0083 up to 2.33 Gy/s. It was established experimentally that residual OCP concentrations in individual samples of food products (fish, milk, cereal crops, meat, butter) are in the range 10⁻¹-10⁻⁴ mg/kg, the value depending on the characteristics of the territory and on natural migration processes. These results were used in the preparation of the model OCP solutions.
The dependence of the OCP degradation extent on the gamma-irradiation dose rate is complex in nature. According to our data, at a dose of 10 kGy the degradation extent of OCPs first increases, passes through a maximum (over the range 0.23-0.43 Gy/s), and then decreases as the dose rate grows further. This character of the dependence holds for the various OCPs, in polar and nonpolar solvents, and does not vary with the concentration of the initial substance. The conditions for the maximal radiochemical yield of OCP degradation were also determined: gamma irradiation at a dose of 10 kGy in the dose-rate range 0.23-0.43 Gy/s; an initial OCP concentration of 1 ppm; and 2-propanol as the solvent, after preliminary removal of oxygen. Since the study of model OCP solutions established that the degradation extent of the pesticides and the qualitative composition of the OCP radiolysis products depend on the dose rate, it was decided to continue the research into radiochemical transformations of OCPs in foodstuffs at various dose rates.
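Because the study fixes the total dose (10 kGy) and varies the dose rate, the irradiation time follows directly from t = D / Ḋ. The small helper below makes the quoted optimum window concrete; the conversion itself is standard, and only the example values are chosen here.

```python
# Relating the quoted total dose (10 kGy) to irradiation time at a given
# dose rate, t = dose / dose_rate. The 0.23-0.43 Gy/s window is the
# degradation-maximum range reported in the text.

def exposure_time_s(dose_gy, dose_rate_gy_per_s):
    """Seconds needed to accumulate dose_gy at a constant dose rate."""
    return dose_gy / dose_rate_gy_per_s

t_slow = exposure_time_s(10_000.0, 0.23)   # low end of the optimum window
t_fast = exposure_time_s(10_000.0, 0.43)   # high end of the optimum window
hours_slow, hours_fast = t_slow / 3600.0, t_fast / 3600.0
```

So delivering the 10 kGy dose within the reported optimum window corresponds to exposures of roughly 6.5 to 12 hours, which is relevant when planning irradiation of real food samples.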

Keywords: degradation extent, dosimetric characteristics, gamma-radiation, organochlorinated pesticides, persistent organic pollutants

Procedia PDF Downloads 249
632 Life Cycle Assessment of a Parabolic Solar Cooker

Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize

Abstract:

Cooking is a primary human need, and several techniques are used around the globe based on different sources of energy: electricity, solid fuel (wood, coal, etc.), liquid fuel, or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in the household. Concentrated solar power therefore represents a great option for lowering these damages, thanks to a cleaner use phase. Nevertheless, the construction phase of a solar cooker still requires primary energy and materials, which leads to environmental impacts. The aim of this work is to analyse the ecological impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day for 40 years as the functional unit. Life cycle assessment was performed using the Umberto software and the EcoInvent database. Calculations were carried out over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, different provenances of aluminium were compared, as well as the use of recycled aluminium; for the structure, aluminium was compared to iron (primary and recycled) and wood. Results show that the climate impact of the studied parabola was 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared to primary Canadian and primary Chinese aluminium, respectively, while the exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact.
The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact, except for metal resource depletion, compared to aluminium from Canada, recycled aluminium, or iron. The impact of solar cooking was then compared to cooking on a gas stove and by induction. The gas stove model was a cast-iron tripod supporting the cooking pot, and the induction model was a single-spot plate. Results show that the parabolic solar cooker has the lowest ecological impact over the 13 criteria of the ReCiPe method and over global warming potential compared to the two other technologies. The climate impact of gas cooking is 0.628 kgCO2eq/kWh with natural gas and 0.723 kgCO2eq/kWh with bottled gas; in each case, the main part of the emissions comes from burning the gas. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production. The electricity mix is therefore a key factor for this impact: with the electricity mixes of Germany and Poland, the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. The parabolic solar cooker thus has a real ecological advantage over both the gas stove and the induction plate.
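The functional-unit arithmetic behind these per-kWh comparisons can be reproduced roughly as follows. The 80 K temperature rise and the assumption of lossless heat transfer are our simplifications, not the study's inventory, so the absolute totals are indicative only.

```python
# Rough functional-unit arithmetic: boiling 2 L of water three times a day
# for 40 years, multiplied by the per-kWh climate factors quoted above.
# The 80 K rise and 100% heat-transfer efficiency are simplifications.

SPECIFIC_HEAT = 4186.0  # J/(kg K), water

def boil_energy_kwh(litres=2.0, delta_t_k=80.0):
    """Useful heat to bring the water to the boil, converted J -> kWh."""
    return litres * SPECIFIC_HEAT * delta_t_k / 3.6e6

def lifetime_kg_co2(factor_kg_per_kwh, boils_per_day=3, years=40):
    """Lifetime climate impact for one per-kWh factor over the functional unit."""
    return factor_kg_per_kwh * boil_energy_kwh() * boils_per_day * 365 * years

solar_cn = lifetime_kg_co2(0.0353)    # parabola, Chinese aluminium
induction_fr = lifetime_kg_co2(0.12)  # induction, French electricity mix
gas_bottled = lifetime_kg_co2(0.723)  # bottled gas
```

Even with these crude assumptions the ordering reported in the abstract is preserved: the solar cooker's lifetime impact is a few hundred kgCO2eq, versus several thousand for bottled gas.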

Keywords: life cycle assessment, solar concentration, cooking, sustainability

Procedia PDF Downloads 184
631 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening

Authors: Partha Saha, Uttam Kumar Banerjee

Abstract:

Maternal and Child Healthcare (MCH) has always been regarded as one of the important issues globally. Reduction of maternal and child mortality rates and increased healthcare service coverage were declared targets of the Millennium Development Goals (MDGs) until 2015, and thereafter became an important component of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not reached the expected levels. The progress of both maternal and child mortality rates has been monitored by several researchers, and each of these studies has stated that fewer than 26% of low- and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG 4. The average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9% as of 2011, respectively, whereas annual rates of at least 4.4% and 5.5% are needed to achieve the targets. In spite of the existence of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge-discovery-based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is required, and in most LMICs resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource plans for combating issues such as high inequity and low coverage in MCH. The system helps healthcare managers accomplish the following four tasks.
Those are: a) comprehending region-wise conditions of variables related to MCH; b) identifying relationships among variables; c) segmenting regions based on variable status; and d) finding segment-wise key influential variables that have a major impact on healthcare indicators. The system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) developing the system; and iii) verifying and validating the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed. The first module is designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the three categories mentioned above, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research, and data from 601 districts of India were used to inspect the effectiveness of the developed modules. The system was built by implementing statistical and data-mining techniques on a web platform. Aided by its interactive capability, policy makers can generate different scenarios from the system before drawing any inference.
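As a hedged illustration of the segmentation task described above, the sketch below groups hypothetical districts by the status of their indicators using a minimal k-means; the indicator values, the choice of k, and the clustering algorithm itself are illustrative assumptions, not the system's actual data-mining pipeline.

```python
import numpy as np

def segment_regions(X, k=2, iters=50, seed=0):
    """Minimal k-means: group regions (rows) by the status of their
    indicator variables (columns). A stand-in for the DSS's actual
    data-mining pipeline."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each region to its nearest centroid
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned regions
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical district-level indicators (rows: districts; columns: e.g.
# antenatal-care coverage, skilled birth attendance), not real data
X = np.array([[0.90, 0.80], [0.85, 0.75], [0.20, 0.30], [0.25, 0.20]])
labels = segment_regions(X, k=2)
```

Districts with similar indicator profiles end up in the same segment, after which segment-wise influential variables can be inspected.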

Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries

Procedia PDF Downloads 258
629 Assessing Prescribed Burn Severity in the Wetlands of the Paraná River, Argentina

Authors: Virginia Venturini, Elisabet Walker, Aylen Carrasco-Millan

Abstract:

Latin America stands at the front line of climate change impacts, with forecasts projecting accelerated temperature and sea level rises compared to the global average. These changes are set to trigger a cascade of effects, including coastal retreat, intensified droughts in some nations, and heightened flood risks in others. In Argentina, wildfires historically affected forests, but since 2004, wetland fires have emerged as a pressing concern. By 2021, a high-risk scenario had formed naturally in the wetlands of the Paraná River: very low water levels in the rivers and excessive standing dead plant material (fuel) triggered most of the fires recorded in this vast wetland region during 2020-2021. In 2008, fire events devastated nearly 15% of the Paraná Delta, and by late 2021 new fires had burned more than 300,000 ha of these same wetlands. The goal of this work is therefore to explore remote sensing tools for monitoring environmental conditions and the severity of prescribed burns in the Paraná River wetlands. Two prescribed burning experiments were carried out in the study area (31°40’05’’ S, 60°34’40’’ W) during September 2023. The first experiment was conducted on September 13th in a 0.5 ha plot whose dominant vegetation was Echinochloa sp. and Thalia, while the second was conducted on September 29th in a 0.7 ha plot next to the first burned parcel, where the dominant species were Echinochloa sp. and Solanum glaucophyllum. Field campaigns were conducted between September 8th and November 8th to assess the severity of the prescribed burns. Flight surveys were conducted using a DJI® Inspire II drone equipped with a Sentera® NDVI camera. Burn severity was then quantified by analysing images captured by the Sentera camera along with data from the Sentinel-2 satellite mission.
This involved subtracting the NDVI images obtained before and after the burn experiments. The results from both data sources demonstrate a highly heterogeneous impact of fire within the patch. Mean severity values for the first experiment were about 0.16 with drone NDVI images and 0.18 with Sentinel images; for the second experiment, mean values were approximately 0.17 with the drone and 0.16 with Sentinel images. Thus, most pixels showed low fire severity and only a few presented moderate burn severity on the wildfire scale. The undisturbed plots maintained consistent mean NDVI values throughout the experiments. Moreover, the severity assessment of each experiment revealed that the vegetation was not completely dry, despite experiencing extreme drought conditions.
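The NDVI-differencing step described above can be sketched as follows; the pixel values and the low-severity thresholds are illustrative placeholders, not the study's measurements (NDVI itself is (NIR − Red)/(NIR + Red)).

```python
import numpy as np

def delta_ndvi(ndvi_pre, ndvi_post):
    """Burn-severity proxy: pre-burn NDVI minus post-burn NDVI.
    Larger positive values indicate greater vegetation loss."""
    return np.asarray(ndvi_pre, float) - np.asarray(ndvi_post, float)

# Illustrative pixel values only (not the study's imagery)
pre = np.array([[0.55, 0.60],
                [0.58, 0.20]])
post = np.array([[0.40, 0.41],
                 [0.42, 0.19]])
dnvi = delta_ndvi(pre, post)
# Placeholder thresholds for a low-severity class on a dNDVI scale
low_severity = (dnvi > 0.1) & (dnvi < 0.27)
```

Averaging `dnvi` over a burned plot gives the kind of mean severity value quoted in the abstract.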

Keywords: prescribed-burn, severity, NDVI, wetlands

Procedia PDF Downloads 67
629 Strategies for Public Space Utilization

Authors: Ben Levenger

Abstract:

Social life revolves around a central meeting place or gathering space. It is where the community integrates, learns social skills, and ultimately becomes part of the community. Following this premise, public spaces are among the most important spaces that downtowns offer, providing locations for people to be seen, heard, and, most importantly, seamlessly integrated into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper takes a deep look at analyzing, designing, and implementing public space design for small plazas or gathering spaces. These spaces often require a detailed level of study, followed by broad design implementation that allows for adaptability. This paper will highlight how to assess needs, define the types of spaces needed, outline a program for spaces, detail elements of design that meet the needs, assess the new space, and plan for change. This study will provide participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism for the sake of urbanism is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to justify the purpose of these efforts and the infrastructure or construction costs. To ground this work in quantitative impacts, we take a deep dive into case studies highlighting economic effects.
These case studies measure the financial impact on an area using the following metrics: rental rates (per square meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased housing density. The economic impact results are segmented by community size into three tiers: under 10,000 in population, 10,001 to 75,000 in population, and over 75,000 in population. Through this classification, participants can gauge the impact in communities similar to those they work in or are responsible for. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, will be discussed. Metrics documenting the economic impact of each enhancement will be presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive; they are welcome to download the data and use it for their own purposes.

Keywords: downtown, economic development, planning, strategic

Procedia PDF Downloads 81
628 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields unparalleled therapeutic value within patient-physician interactions. Medical research is inundated with evidence that a physician’s ability to empathise with patients leads to a greater willingness to report symptoms, improved diagnostic accuracy and safety, and better adherence and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, one of the six main goals of a 21st-century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines among students and physicians over time. The three most impactful contributors to this decline are: (1) disagreements over the definition of empathy, making it difficult to implement in practice; (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether; and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience, deterring some physicians from using ‘too much’ empathy for fear of losing clinical objectivity. These issues were investigated through a fully inductive thematic narrative analysis of patient narratives in the UK, evaluating the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study were to uncover the factors that affected the experience of empathy within provider-patient interactions and to analyse their effects on patient care. This research contributes uniquely to the discourse by examining empathy directly from patients’ experiences, which were systematically extracted from ‘CareOpinion UK’, an online repository of patient narratives of care.
Narrative analysis was specifically chosen as the methodology in order to examine narratives through a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribed meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and threats to life. The following themes were found to be the most impactful in shaping the experience of empathy: dismissive behaviours, judgmental attitudes, undermining of patients’ pain or concerns, holistic care, and failures and successes of communication or language. For each theme there were overarching themes relating either to a failure to understand the patient’s perspective or to a success in taking a person-centred approach. In-depth analysis revealed that a lack of empathy was strongly associated with an emotive-cognitive imbalance that disengaged physicians from their patients’ emotions. This study concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy cues within interactions to engage with patients’ situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication to the patient’s individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 137
627 Correlation between Different Radiological Findings and Histopathological Diagnosis of Breast Diseases: A Retrospective Review Conducted over Six Years in King Fahad University Hospital in the Eastern Province, Saudi Arabia

Authors: Sadeem Aljamaan, Reem Hariri, Rahaf Alghamdi, Batool Alotaibi, Batool Alsenan, Lama Althunayyan, Areej Alnemer

Abstract:

The aim of this study is to correlate radiological findings with histopathological results in regard to Breast Imaging-Reporting and Data System (BI-RADS) scores, the size of breast masses, molecular subtypes, and suspicious radiological features, as well as to assess the concordance in histological grade between core biopsy and surgical excision among breast cancer patients, and to analyse how the concordance rate changes in relation to neoadjuvant chemotherapy in a Saudi population. A retrospective review was conducted over a 6-year period (2017-2022) on all breast core biopsies of women preceded by radiological investigation. The chi-squared test (χ²) was performed on qualitative data, the Mann-Whitney test for quantitative non-parametric variables, and the kappa test for grade agreement. A total of 641 cases were included. Ultrasound, mammography, and magnetic resonance imaging demonstrated diagnostic accuracies of 85%, 77.9%, and 86.9%, respectively. Magnetic resonance imaging manifested the highest sensitivity (72.2%), and the lowest was for ultrasound (61%). Concordance in tumor size with final excisions was best for magnetic resonance imaging, while mammography demonstrated a higher tendency of overestimation (41.9%) and ultrasound showed the highest underestimation (67.7%). The association between basal-like molecular subtypes and a BI-RADS score 5 classification was statistically significant only for magnetic resonance imaging (p=0.04). Luminal subtypes demonstrated a significantly higher percentage of spiculation on mammography. BI-RADS score 4 encompassed a substantial number of benign pathologies in all three modalities. A fair concordance rate (κ = 0.212 and 0.379) was demonstrated between excision and the preceding core biopsy grading with and without neoadjuvant therapy, respectively. The results demonstrated downgrading in cases after neoadjuvant therapy.
In cases that did not receive neoadjuvant therapy, underestimation of tumor grade in the biopsy was evident. In summary, magnetic resonance imaging had the highest sensitivity, specificity, positive predictive value, and accuracy for both diagnosis and estimation of tumor size. Mammography demonstrated better sensitivity than ultrasound and had the highest negative predictive value, but ultrasound had better specificity, positive predictive value, and accuracy. Therefore, combining the different modalities is advantageous. The concordance rate of core biopsy grading with excision was not impacted by neoadjuvant therapy.
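For readers unfamiliar with the statistics reported above, the sketch below computes the standard diagnostic metrics and Cohen's kappa from confusion counts; the counts used are hypothetical, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def cohens_kappa(table):
    """Cohen's kappa for an n x n agreement table:
    table[i][j] = cases graded i on core biopsy and j on excision."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_expected = sum(
        sum(table[i]) * sum(row[i] for row in table) for i in range(len(table))
    ) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts, not the study's data
m = diagnostic_metrics(tp=72, fp=10, fn=28, tn=90)
k = cohens_kappa([[20, 5], [10, 15]])
```

On the conventional scale, κ between 0.21 and 0.40 is read as "fair" agreement, which is how the reported 0.212 and 0.379 are interpreted.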

Keywords: breast cancer, mammography, MRI, neoadjuvant, pathology, US

Procedia PDF Downloads 82
626 Analysis of the Interests, Conflicts and Power Resources in the Urban Development in the Megacity of Sao Paulo

Authors: A. G. Back

Abstract:

Urban planning is a relevant tool for addressing, in a systemic way, several sectoral policies capable of linking the urban agenda with the reduction of socio-environmental risks. Sao Paulo’s master plan (2014) presents innovations capable of promoting the transition to sustainability in urban space through regulatory instruments related to: i) promotion of density along mass transport axes, mixing commercial, residential, service, and leisure uses (principles related to the compact city); ii) reduction of vulnerabilities through housing policies, including regular sources of funds for social housing and land reserves in urbanized areas; and iii) reserves of green areas in the city to create parks, and environmental regulations for new buildings focused on reducing heat island effects and improving urban drainage. However, long-term implementation involves distributive conflicts and can undergo changes across different political, economic, and social contexts over time. Thus, the main objective of this paper is to identify and analyze the dynamics of conflicts of interest between social groups in the implementation of Sao Paulo’s urban development policy, particularly in relation to recent attempts at a (re)interpretation of the master plan guidelines in view of the proposals to revise the urban zoning law. In this sense, we seek to identify the demands and narratives of urban actors, including the real estate market, middle-class neighborhood associations ('not in my backyard' movements), and social housing rights movements. We also seek to analyze the power resources these actors mobilize to influence the decision-making process, involving five categories: social capital, political access, discursive resources, media, and juridical resources. The major findings of this research suggest that the interests and demands of the real estate market do not always prevail in urban regulation.
After all, other actors also press for urban law with interests opposed to those of the real estate market. This is the case of middle-class neighborhood associations, which work to protect the characteristics of their localities, acting, in general, to prevent building and population densification in well-located neighborhoods near the center of São Paulo. One of the main demands of these “not in my backyard” movements is the delimitation of exclusively residential areas in the central region of the city, which is contrary not only to the interests of the real estate market but also to the principles of the compact city. On the other hand, social housing rights movements have also made progress in delimiting special areas of social interest in well-located and highly valued parts of the city dedicated to building social housing, likewise contrary to the interests of the real estate market. Urban development that follows the principles of the compact city must take into account the insertion of low-income populations in well-located regions; otherwise, such a development model may continue to push the less favored to the peripheries, towards preservation areas and/or risk areas.

Keywords: interest groups, Sao Paulo, sustainable urban development, urban policies implementation

Procedia PDF Downloads 110
625 Synergy Surface Modification for High Performance Li-Rich Cathode

Authors: Aipeng Zhu, Yun Zhang

Abstract:

Growing environmental problems, together with the exhaustion of energy resources, create urgent demand for batteries with high energy density. Considering capacity, resources, and environmental factors, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be among the most promising cathode materials for next-generation Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial application is hindered by crucial drawbacks such as poor rate performance, limited cycle life, and continuous fading of the discharge potential. Decades of extensive studies have yielded significant achievements in improving their cyclability and rate performance, but these materials still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation: metal ions dissolve into the electrolyte, and the surface phase change hinders the intercalation/deintercalation of Li ions, resulting in low capacity retention and low working voltage. Surface coating is an efficient method for optimizing LLO cathode materials, and considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, owing to the low initial Coulombic efficiency (ICE), the pristine LLOs were pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method, thoroughly mixed with Li₂CO₃, and calcined in air at 500 °C for 5 h and 900 °C for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then stirred in 0.1 ml/g KMnO₄ solution for 3 h.
The resultant was filtered, washed with water, and dried in an oven. The LLOs obtained were dispersed in Al(NO₃)₃ solution, and the mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLOs. After lyophilization, the LLOs were calcined at 500 °C for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting a mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone, with a mass ratio of 80:15:5, onto aluminum foil. Electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) compared with pristine LNMO (76% and 80%, respectively). These results suggest that the KMnO₄ pretreatment and Al₂O₃ coating increase the ICE and cycling stability.
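The two figures of merit reported here (ICE and capacity retention) are simple ratios; the sketch below shows the arithmetic with hypothetical capacities chosen only to reproduce the reported percentages, not measured values from the study.

```python
def initial_coulombic_efficiency(first_discharge_mAh_g, first_charge_mAh_g):
    """ICE = first-cycle discharge capacity / first-cycle charge capacity."""
    return first_discharge_mAh_g / first_charge_mAh_g

def capacity_retention(capacity_after_n_cycles, initial_capacity):
    """Fraction of the initial capacity kept after n cycles."""
    return capacity_after_n_cycles / initial_capacity

# Hypothetical capacities (mAh/g) chosen to reproduce the reported 84% and 91%
ice = initial_coulombic_efficiency(252.0, 300.0)
retention = capacity_retention(227.5, 250.0)
```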

Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃

Procedia PDF Downloads 131
624 Processes Controlling Release of Phosphorus (P) from Catchment Soils and the Relationship between Total Phosphorus (TP) and Humic Substances (HS) in Scottish Loch Waters

Authors: Xiaoyun Hui, Fiona Gentle, Clemens Engelke, Margaret C. Graham

Abstract:

Although past work has shown that phosphorus (P), an important nutrient, may form complexes with aqueous humic substances (HS), the principal component of natural organic matter, the nature of such interactions is poorly understood. Humic complexation may not only enhance P concentrations but may also change its bioavailability within such waters and, in addition, influence its transport within catchment settings. This project examines the relationships and associations of P, HS, and iron (Fe) in Loch Meadie, Sutherland, North Scotland, a mesohumic freshwater loch which has been assessed as being at reference condition with respect to P. The aim is to identify characteristic spectroscopic parameters which can enhance the performance of the model currently used to predict reference-condition TP levels for highly coloured Scottish lochs under the Water Framework Directive. In addition to Loch Meadie, samples from other reference-condition lochs in north Scotland and Shetland were analysed. Including different types of reference-condition lochs (clear water, mesohumic and polyhumic water) allowed the relationship between total phosphorus (TP) and HS to be more fully explored. The pH, [TP], [Fe], UV/Vis absorbance spectra, [TOC] and [DOC] for loch water samples were obtained using accredited methods. Loch waters were neutral to slightly acidic/alkaline (pH 6-8). [TP] in loch waters was lower than 50 µg L⁻¹, and in Loch Meadie waters was typically <10 µg L⁻¹. [Fe] in loch waters was mainly <0.6 mg L⁻¹, but for some loch water samples [Fe] was in the range 1.0-1.8 mg L⁻¹, and there was a positive correlation with [TOC] (R² = 0.61). Lochs were classified as clear water, mesohumic or polyhumic based on water colour; the ranges of colour values of sampled lochs in each category were 0.2–0.3, 0.2–0.5 and 0.5–0.8 a.u. (10 mm pathlength), respectively. There was also a strong positive correlation between [DOC] and water colour (R² = 0.84).
The UV/Vis spectra (200-700 nm) of the water samples were featureless, with only a slight “shoulder” observed in the 270–290 nm region. Ultrafiltration was then used to separate colloidal and truly dissolved components from the loch waters and, since it contained the majority of aqueous P and Fe, the colloidal component was fractionated by gel filtration chromatography. This fractionation revealed two brown-coloured bands with distinctive UV/Vis spectral features. The first eluting band contained larger and more aromatic HS molecules than the second band, and both P and Fe were primarily associated with the larger, more aromatic HS. This result demonstrates that P is able to form complexes with Fe-rich components of HS, and thus provides a scientific basis for the significant correlation between [Fe] and [TP] seen in previous monitoring data for reference-condition lochs from the Scottish Environment Protection Agency (SEPA). The distinctive features of the HS will be used as the basis for an improved spectroscopic tool.
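The correlation strengths quoted in this abstract (R² values) come from simple linear fits; a minimal sketch of the computation, using made-up colour and [DOC] values rather than the survey data, is:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x
    (the square of Pearson's correlation coefficient)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Made-up colour (a.u., 10 mm pathlength) and [DOC] (mg/L) values,
# not the survey data
colour = [0.2, 0.3, 0.5, 0.8]
doc = [3.0, 5.0, 9.0, 14.0]
r2 = r_squared(colour, doc)
```

An R² near 1 indicates that water colour is an almost perfect linear proxy for [DOC], which is the relationship the abstract reports (R² = 0.84).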

Keywords: total phosphorus, humic substances, Scottish loch water, WFD model

Procedia PDF Downloads 546
623 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed addressing not only the needs of customers but also their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition, with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in transport-related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite communication (SatCom), including future 5G networks, and digital technologies such as digital twins, AI, machine learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in the case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication: SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks makes possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, and much more seamless communication coverage. Satellite navigation (SatNav) is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations.
SatNav can also enable location-based services such as car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real time, and to respond to changes instantaneously. Satellite Earth observation (SatEO) can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth-observation-derived data provide meteorological information, such as wind speed and direction and humidity, that must be fed into models contributing to traffic management services. The paper will provide examples of services and applications developed to identify innovative solutions and new business models enabled by new digital technologies, engaging the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include connected autonomous vehicles, electric vehicles, green logistics, and others. Relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 133
622 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow narrows to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations remain valid in the bulk flow, but particular attention must be paid to the boundary conditions at the gas/solid interface. Indeed, in the vicinity of the wall, over a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity slip, temperature jump, and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to relax these assumptions, simulations at the molecular scale are carried out to study how molecular interactions with the walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid; ii) close to the walls, to identify the relationships between the slip velocity and the shear stress, or between the temperature jump and the normal temperature gradient.
The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws describing the behavior of the physical properties, the equation of state, and the slip relationships, together with their uncertainties. The latter make it possible to set up a learning strategy that optimizes the number of micro-simulations. In the present contribution, the first results of this coupling, combined with the learning strategy, are illustrated through parametric studies of the convergence criteria, the choice of basis functions, and the noise of the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
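A minimal sketch of the Bayesian-regression idea, fitting a linear slip law with posterior uncertainty, is given below; the conjugate Gaussian model, the prior/noise precisions, and the synthetic shear-stress data are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def bayes_linfit(x, y, alpha=1e-3, beta=25.0):
    """Conjugate Bayesian linear regression y ~ w0 + w1*x.
    alpha: prior precision on the weights; beta: noise precision.
    Returns the posterior mean and covariance of (w0, w1)."""
    X = np.column_stack([np.ones_like(x), x])
    S_inv = alpha * np.eye(2) + beta * X.T @ X
    S = np.linalg.inv(S_inv)
    m = beta * S @ (X.T @ y)
    return m, S

def predict(xq, m, S, beta=25.0):
    """Posterior predictive mean and variance at query points xq;
    the variance can drive a learning strategy (sample where it is large)."""
    Xq = np.column_stack([np.ones_like(xq), xq])
    mean = Xq @ m
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", Xq, S, Xq)
    return mean, var

# Synthetic 'slip velocity vs shear stress' data from an assumed linear law
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x
m, S = bayes_linfit(x, y)
mean, var = predict(np.array([1.5]), m, S)
```

The predictive variance is exactly the quantity that would let new micro-simulations be placed where the continuous law is least certain.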

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 166
621 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations while carrying no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
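The abstract describes, but does not show, the scan-line traversal and the Slope Consistency Value. The following Python sketch is an illustrative assumption, not the authors' GLSL shader: it gives a 2D analogue of equidistant scan-line traversal that visits each covered cell exactly once, plus one plausible form of a slope-consistency metric (the fraction of the triangle normal carried by its primary axis). All function names and the metric's formula are hypothetical.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dominant_axis_and_normal(v0, v1, v2):
    """Primary axis of a triangle: the axis along which its normal is largest."""
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    n = cross(e1, e2)
    return max(range(3), key=lambda i: abs(n[i])), n

def slope_consistency(n, axis):
    """Hypothetical metric: 1.0 when the triangle is perfectly axis-aligned."""
    denom = sum(abs(c) for c in n)
    return abs(n[axis]) / denom if denom else 0.0

def scanline_cells(p0, p1, p2, h=1.0):
    """Visit each cell crossed by equidistant scan-lines through a 2D
    triangle exactly once (no revisits, hence no redundant memory writes)."""
    y_min = min(p[1] for p in (p0, p1, p2))
    y_max = max(p[1] for p in (p0, p1, p2))
    cells = []
    y = y_min + h / 2                    # scan-lines at cell-centre height
    while y < y_max:
        xs = []
        for a, b in ((p0, p1), (p1, p2), (p2, p0)):
            if (a[1] - y) * (b[1] - y) < 0:      # edge crosses this line
                t = (y - a[1]) / (b[1] - a[1])
                xs.append(a[0] + t * (b[0] - a[0]))
        if len(xs) == 2:
            lo, hi = sorted(xs)
            xi = int(lo // h)
            while xi * h + h / 2 <= hi:          # step cell by cell
                if xi * h + h / 2 >= lo:
                    cells.append((xi, int(y // h)))
                xi += 1
        y += h
    return cells
```

Because each scan-line covers a distinct row of cells and stepping within a row never repeats a cell, no voxel is discovered twice, which is the property the abstract credits for the performance gain.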

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 72
620 Variation of Lexical Choice and Changing Need of Identity Expression

Authors: Thapasya J., Rajesh Kumar

Abstract:

Language plays complex roles in society. Previous studies on language and society explain their interconnected, complementary and complex interactions, and those studies focused primarily on variation in language. Variation being the fundamental nature of languages, the question of personal and social identity has been navigated through language variation, establishing that language variation and identity are interconnected. This paper analyses sociolinguistic variation at the lexical level and how the lexical choice of speakers shapes their identity. It obtains primary data from the lexicon of the Mappila dialect of Malayalam, spoken by members of the Mappila (Muslim) community of Kerala. The variation in lexical choice is analysed by collecting 15-minute speech samples from four different age groups of Mappila dialect speakers. Various contexts were analysed, and the frequency of borrowed words in each instance was calculated to reach a conclusion on how the variation is happening in the speech community. The paper shows how the lexical choice of speakers can be socially motivated and involved in shaping and changing identities. Lexical items, or vocabulary, clearly signal group identity and personal identity. The Mappila dialect of Malayalam was rich in frequently used borrowed words from Arabic, Persian and Urdu. There was a deliberate attempt to show identity as a Mappila community member, derived from the socio-political situation of those days. This made a clear variation between the Mappila dialect and other dialects of Malayalam at the surface level, motivated by the desire to create and establish a person's identity as a member of the Mappila community. 
Historically, these kinds of linguistic variation were highly motivated by socio-political factors and intertwined with the historical facts about the origin and spread of Islam in the region; people from the Mappila community were highly motivated to project their identity as Mappilas because of the social insecurities they had faced before accepting the religion. Thus the deliberate inclusion of Arabic, Persian and Urdu words in their speech helped in showing their identity. However, the socio-political situations and factors present at the origin of the Mappila community have changed over time. The social motivation for indicating their identity as Mappilas no longer exists, and thus the frequency of borrowed words from Arabic, Persian and Urdu has been reduced in their speech. Apart from religious terms, borrowed words from these languages are now very few. The analysis examines changes in the language of speakers according to their age; significant variation is found between generations, and literacy plays a major role in this variation process. The need to project a specific identity varies with changes in the socio-political scenario, and, in any language, variation in language can shape identity so as to match the varying socio-political situation.
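The frequency computation described above can be sketched in a few lines. The lexicon and sample tokens below are invented for illustration, not the study's data; only the procedure (share of borrowed tokens per speech sample, compared across age groups) follows the abstract.

```python
from collections import Counter

# Hypothetical set of Arabic/Persian/Urdu loanwords (toy lexicon).
BORROWED = {"kitab", "khabar", "duniya"}

def borrowing_rate(tokens):
    """Share of tokens in a speech sample drawn from the borrowed lexicon."""
    hits = Counter(tok in BORROWED for tok in tokens)
    return hits[True] / len(tokens)

# Hypothetical samples from two age groups of speakers.
older_sample = ["kitab", "veedu", "khabar", "nalla"]
younger_sample = ["pustakam", "veedu", "khabar", "nalla"]
```

Comparing `borrowing_rate(older_sample)` with `borrowing_rate(younger_sample)` mirrors the generational comparison the study reports, with the older group showing the higher rate.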

Keywords: borrowings, dialect, identity, lexical choice, literacy, variation

Procedia PDF Downloads 237
619 How Holton’s Thematic Analysis Can Help to Understand Why Fred Hoyle Never Accepted Big Bang Cosmology

Authors: Joao Barbosa

Abstract:

After an intense dispute between big bang cosmology and its big rival, steady-state cosmology, some important experimental observations, such as the determination of the helium abundance in the universe and the discovery of the cosmic background radiation in the 1960s, were decisive for the progressive and wide acceptance of big bang cosmology and the inevitable abandonment of steady-state cosmology. But, despite solid theoretical support and those solid experimental observations favorable to big bang cosmology, Fred Hoyle, one of the proponents of the steady-state and the main opponent of the idea of the big bang (which, paradoxically, he himself had named), never gave up and continued to fight for the idea of a stationary (or quasi-stationary) universe until the end of his life, even after decades of widespread consensus around big bang cosmology. We can try to understand this persistent attitude of Hoyle by applying Holton’s thematic analysis to cosmology. Holton recognizes in scientific activity a dimension that, even when unconscious or unacknowledged, is nevertheless very important in the work of scientists, in implicit articulation with the experimental and theoretical dimensions of science. This is the thematic dimension, constituted by themata – concepts, methodologies, and hypotheses with a metaphysical, aesthetic, logical, or epistemological nature, associated both with the cultural context and the individual psychology of scientists. In practice, themata can be expressed through personal preferences and choices that guide the individual and collective work of scientists. Thematic analysis shows that big bang cosmology is mainly based on a set of themata consisting of evolution, finitude, life cycle, and change; the cosmology of the steady-state is based on opposite themata: steady-state, infinity, continuous existence, and constancy. 
The passionate controversy between these cosmological views is part of an old cosmological opposition: the thematic opposition between an evolutionary view of the world (associated with Heraclitus) and a stationary view (associated with Parmenides). Personal preferences seem to have been important in this (thematic) controversy, and the thematic analysis developed here shows that Hoyle is a very illustrative example of a life-long personal commitment to some themata, in this case to the themata opposite those of big bang cosmology. His struggle against the big bang idea was strongly based on philosophical and even religious reasons – which, in a certain sense and in a Holtonian perspective, is related to thematic preferences. In this personal and persistent struggle, Hoyle always refused the way some experimental observations were considered decisive in favor of the big bang idea, arguing that the success of this idea was based on sociological and cultural prejudices. This attitude of Hoyle's is a personal thematic one, in which the acceptance or rejection of what is presented as proof or scientific fact is conditioned by themata: what is a proof or a scientific fact for one scientist is something yet to be established for another scientist who defends different or even opposite themata.

Keywords: cosmology, experimental observations, Fred Hoyle, interpretation, life-long personal commitment, themata

Procedia PDF Downloads 168
618 Efficacy of CAM Methods for Pain Reduction in Acute Non-specific Lower Back Pain

Authors: John Gaber

Abstract:

Objectives: Complementary and alternative medicine (CAM) is a medicine or health practice that is used alongside conventional practice. CAM is now commonly used in North America and other countries, and more scientific study is needed to understand its efficacy in different clinical cases. This retrospective study explores the effectiveness and recovery time of CAMs such as cupping, acupuncture, and sotai in treating cases of acute non-specific low back pain (ANLBP). Methods: We assessed the effectiveness of the acupuncture, cupping, and sotai methods on pain in the treatment of ANLBP, comparing the magnitude of pain relief across treatments with a pain scale assessment. The Face Pain Scale (FPS) assessment was conducted before and 24 hours post-treatment. This retrospective study analyzed 40 patients, categorized according to the treatment they received: a control group and three intervention groups, each with ten patients. Each of the three intervention groups received one of the intervention methods. The first group received the cupping treatment, with cups placed on both sides of the lower back on points BL23, BL25, BL26, BL54, BL37, BL40, and BL57. After vacuuming, the cups stayed in place for 10-15 minutes under infrared (IR) heating, applied by an infrared heat lamp. The second group received the acupuncture treatment, with needles placed on points BL23, BL25, BL26, BL52, BL54, GB30, BL37, BL40, BL57, BL59, BL60, and KI3; the needles were stimulated with IR light. The final group received the sotai treatment, a Japanese form of structural realignment that relieves pain and improves balance and mobility by moving the body naturally and spontaneously towards a comfortable direction, focusing on the inner feeling and synchronizing with the patient’s breathing. The SPSS statistical software was used to analyze the data using repeated-measures ANOVA. 
The data collected demonstrate the change in the FPS value over the course of treatment; p<0.05 was considered statistically significant. Results: In the cupping, acupuncture, and sotai therapy groups, the mean FPS value was reduced from 8.7±1.2, 8.8±1.2, and 9.0±0.8 before the intervention to 3.5±1.4, 4.3±1.4, and 3.3±1.3 at 24 hours after the intervention, respectively. The data show that the CAM methods included in this study all produced improvements in pain relief 24 hours after treatment. Conclusion: Complementary and alternative medicines were developed to treat injuries and illnesses with the whole body in mind and are designed to be used in addition to standard treatments. The data above show that these treatments can have a pain-relieving effect, but more research should be done on the matter, as finding CAM methods that are efficacious is crucial in the landscape of the health sciences.
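The study itself ran repeated-measures ANOVA in SPSS; as a simpler within-group analogue, the pre/post comparison for one treatment group can be sketched as a paired t statistic on the change scores. The FPS values below are invented for illustration, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for the mean pre-minus-post change (df = n - 1)."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    # mean change divided by the standard error of the change scores
    return mean(diffs) / (stdev(diffs) / sqrt(n))

pre_fps = [9, 8, 10, 9, 8]    # hypothetical FPS scores before treatment
post_fps = [4, 3, 5, 3, 4]    # hypothetical scores 24 h after treatment
```

A large positive t (compared against the t distribution with n-1 degrees of freedom) corresponds to the kind of pre-to-post reduction the abstract reports.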

Keywords: acupuncture, cupping, alternative medicine, rehabilitation, acute injury

Procedia PDF Downloads 56
617 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products/processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and thus are expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or can enhance the effectiveness of the process, are added to the list. For instance, the catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless. 
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to get the process model. The most promising process options are then chosen subject to a performance criterion, for example the purity of the product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
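The generate-combine-screen-encode sequence described above can be sketched as follows. The phenomena list and the single screening rule (phase change requires co-present energy transfer, as stated in the text) stand in for the methodology's full rule set; all names are illustrative assumptions.

```python
from itertools import combinations

# Toy phenomena list; a real process would have many more entries.
PHENOMENA = ["mixing", "reaction", "vapour_liquid_eq",
             "phase_change", "energy_transfer"]

def feasible(combo):
    """Screening rule from the text: phase change needs energy transfer."""
    return not ("phase_change" in combo and "energy_transfer" not in combo)

def generate_options(phenomena, max_size=3):
    """All feasible phenomena combinations up to a given size."""
    return [c for r in range(1, max_size + 1)
            for c in combinations(phenomena, r) if feasible(c)]

def encode(combo, phenomena=PHENOMENA):
    """Binary (1, 0) vector passed to the model-generation algorithm."""
    return [1 if p in combo else 0 for p in phenomena]
```

Each surviving combination is then assigned to the functions it can execute, and its binary encoding is what the model-generation step consumes.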

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
616 Land, History and Housing: Colonial Legacies and Land Tenure in Kuala Lumpur

Authors: Nur Fareza Mustapha

Abstract:

Solutions to policy problems need to be curated to the local context, taking into account the trajectory of the local development path to ensure its efficacy. For Kuala Lumpur, rapid urbanization and migration into the city for the past few decades have increased the demand for housing to accommodate a growing urban population. As a critical factor affecting housing affordability, land supply constraints have been attributed to intensifying market pressures, which grew in tandem with the demands of urban development, along with existing institutional constraints in the governance of land. While demand-side pressures are inevitable given the fixed supply of land, supply-side constraints in regulations distort markets and if addressed inappropriately, may lead to mistargeted policy interventions. Given Malaysia’s historical development, regulatory barriers for land may originate from the British colonial period, when many aspects of the current laws governing tenure were introduced and formalized, and henceforth, became engrained in the system. This research undertakes a postcolonial institutional analysis approach to uncover the causal mechanism driving the evolution of land tenure systems in post-colonial Kuala Lumpur. It seeks to determine the sources of these shifts, focusing on the incentives and bargaining positions of actors during periods of institutional flux/change. It aims to construct a conceptual framework to further this understanding and to elucidate how this historical trajectory affects current access to urban land markets for housing. Archival analysis is used to outline and analyse the evolution of land tenure systems in Kuala Lumpur while stakeholder interviews are used to analyse its impact on the current urban land market, with a particular focus on the provision of and access to affordable housing in the city. 
Preliminary findings indicate that many aspects of the laws governing tenure that were introduced and formalized during the British colonial period have endured until the present day. Customary rules of tenure were displaced by rules following a European tradition, which found legitimacy through a misguided interpretation of local laws regarding the ownership of land. Colonial notions of race and its binary view of native vs. non-natives have also persisted in the construction and implementation of current legislation regarding land tenure. More concrete findings from this study will generate a more nuanced understanding of the regulatory land supply constraints in Kuala Lumpur, taking into account both the long and short term spatial and temporal processes that affect how these rules are created, implemented and enforced.

Keywords: colonial discourse, historical institutionalism, housing, land policy, post-colonial city

Procedia PDF Downloads 128
615 The Impact of Glass Additives on the Functional and Microstructural Properties of Sand-Lime Bricks

Authors: Anna Stepien

Abstract:

The paper presents the results of research on modifications of sand-lime bricks, especially using glass additives (glass fiber and glass sand), as well as other additives (e.g., basalt and barite aggregate, lithium silicate and microsilica). The main goal of this paper is to answer the question ‘How can glass additives be used in the sand-lime mass to get better bricks?’ The article contains information on the modification of sand-lime bricks using glass fiber, glass sand, and microsilica (a different structure of silica). It also presents the results of the conducted compression tests, which were focused on compressive strength, water absorption, bulk density, and microstructure. Scanning electron microscopy, EDS spectra, X-ray diffractometry and DTA analysis helped to define the microstructural changes of the modified products. The interpretation of the products' structure revealed the existence of diversified phases, i.e. C-S-H and tobermorite. The CaO-SiO2-H2O system is the object of intensive research due to its importance in the chemistry and technology of mineral binding materials. Because the blocks are autoclaved materials, the temperature of hydrothermal treatment of the products is around 200°C, the pressure 1.6-1.8 MPa and the time up to 8 hours (1 h heating + 6 h autoclaving + 1 h cooling). The microstructure of the products consists mostly of hydrated calcium silicates with differing levels of structural arrangement. The X-ray diffraction indicated that the type of sand used is an important factor in the manufacturing of sand-lime elements. Quartz sand of high hardness is also a substrate that reacts only weakly with other possible modifiers, which may cause deterioration of certain physical and mechanical properties. The TG and DTA curves show the changes in the weight loss of the sand-lime brick specimens against time, as well as the endo- and exothermic reactions that took place. 
The endothermic effect with a maximum at T=573°C is related to the isomorphic transformation of quartz; this effect is not accompanied by a change in specimen weight. The next endothermic effect, with a maximum at T=730-760°C, is related to the decomposition of calcium carbonates. The bulk density of the brick is 1.73 kg/dm3, and the presence of xonotlite in the microstructure and a significant weight loss during the DTA and TG tests (around 0.6% after 70 minutes) were noticed. The silicate elements were assessed on the basis of their compressive properties. An orthogonal compositional plan of type 3^k (with k=2), i.e. a full two-factor experiment, was applied in order to carry out the experiments in both the compressive strength test and the bulk density test. Some modifications (e.g. products with barite and basalt aggregate) improved the compressive strength to around 41.3 MPa, and water absorption due to capillary rise was limited to 12%. The next modification was adding glass fiber to the sand-lime mass, then glass sand. The results show that the compressive strength was higher than in the case of traditional bricks, while the modified bricks were lighter.
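The experimental design named above, an orthogonal compositional plan of type 3^k with k=2, is a full two-factor, three-level factorial: every combination of coded levels is run once. The factor names below are illustrative assumptions; only the 3^2 = 9-run structure follows the text.

```python
from itertools import product

levels = (-1, 0, 1)    # coded low / centre / high settings
factors = ("glass_additive_content", "autoclaving_time")  # hypothetical names

# 3^2 = 9 runs covering every combination of levels
runs = [dict(zip(factors, point))
        for point in product(levels, repeat=len(factors))]
```

Such a plan is balanced: each level of each factor appears equally often, so the coded levels of every factor sum to zero across the runs, which is what makes the main effects independently estimable.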

Keywords: bricks, fiber, glass, microstructure

Procedia PDF Downloads 347
614 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks

Authors: Shaleeza Sohail

Abstract:

Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations provide students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose, providing automated assessment and feedback on students’ contributions in a group environment based on self and peer evaluations. All these systems lack a progressive aspect to the assessment and feedback, which is the most crucial factor for ongoing improvement and life-long learning. In addition, many assignments and projects are designed so that smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may give students insight into their performance as a group member for a particular component after its submission; ideally, it should also create an opportunity to improve for the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual’s contribution score on an ongoing basis. Hence, students see the change in their own contribution scores over the complete project, based on the smaller assessment components. The Self-Assessment Factor is calculated as an indicator of how close a student's self-perception of their own contribution is to the contribution perceived by the other members of the group. The Peer-Assessment Factor is calculated to compare the perception of one student's contribution with the average value for the group. Our system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback is provided for students and teachers to visualize the consistency of members’ contributions as perceived by the group. 
Teachers can use these factors to judge the individual contributions of group members to the combined tasks and allocate marks/grades accordingly. The Group Coherence Factor is shown to students across all groups undertaking the same assessment, so that group members can comparatively analyze the efficiency of their group against other groups. Our system gives instructors the flexibility to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members’ contributions on a scale from significantly higher to significantly lower. The preliminary testing of the prototype system was done with a set of predefined cases to explicitly show the relation of the system's feedback factors to the case studies. The results show that such progressive feedback can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of group dynamics in order to improve team unity and the fair division of team tasks.
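The three factors described above can be sketched from a single ratings matrix. The abstract names the factors but not their exact definitions, so the formulas below are plausible stand-ins, not the system's actual computation: self-assessment as the gap between self-score and the peers' mean, peer-assessment as each member's peer-perceived contribution relative to the group average, and group coherence as one minus the relative spread of peer-perceived contributions.

```python
from statistics import mean, pstdev

def assessment_factors(ratings):
    """ratings[i][j] = score member i gives member j (i == j is the
    self-evaluation). Returns per-member SAF, PAF and the group GCF."""
    n = len(ratings)
    # mean contribution of member j as perceived by the other members
    peer = [mean(ratings[i][j] for i in range(n) if i != j) for j in range(n)]
    group_avg = mean(peer)
    saf = [ratings[j][j] - peer[j] for j in range(n)]  # 0 = self matches peers
    paf = [peer[j] - group_avg for j in range(n)]      # vs. group average
    gcf = 1 - pstdev(peer) / group_avg if group_avg else 0.0  # 1 = even split
    return saf, paf, gcf

ratings = [[5, 4, 3],   # hypothetical 3-member group, 1-5 scale
           [5, 4, 3],
           [5, 4, 3]]
```

In this toy group every member agrees on everyone's contribution, so all SAF values are zero, while the uneven contributions pull the coherence factor below 1.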

Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system

Procedia PDF Downloads 187
613 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements

Authors: Miria Gambardella

Abstract:

The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constant over the years, both locally and internationally, guaranteeing visibility for the cause, shaping the movement’s choices, and influencing its hopes of worldwide impact. Most of the coffee produced by the autonomous cooperatives of Chiapas is exported, making the coffee trade the main income from international solidarity networks. The question arises of the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically of the strategies adopted to reconcile the army's demands for autonomy with the economic asymmetries between the Zapatista cooperatives producing coffee and the European collectives who hold purchasing power. To deepen the inquiry into these topics, a year-long multi-sited investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists, vendors, and the rest of the network involved in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participant observation, focus groups, and semi-structured interviews. The analysis did not only focus on retracing the steps of the market chain as if it were a linear and unilateral process; rather, it aimed at exploring the actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity, and the circulation of money they imply, aim at changing the system in place and building alternatives, among other things, on the economic level. 
This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. The latter participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in order to account for a reality within which imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity through demonizing “market economy” and its dehumanizing powers.

Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army

Procedia PDF Downloads 87
612 Effect of 12 Weeks Pedometer-Based Workplace Program on Inflammation and Arterial Stiffness in Young Men with Cardiovascular Risks

Authors: Norsuhana Omar, Amilia Aminuddina Zaiton Zakaria, Raifana Rosa Mohamad Sattar, Kalaivani Chellappan, Mohd Alauddin Mohd Ali, Norizam Salamt, Zanariyah Asmawi, Norliza Saari, Aini Farzana Zulkefli, Nor Anita Megat Mohd. Nordin

Abstract:

Inflammation plays an important role in the pathogenesis of vascular dysfunction leading to arterial stiffness. Pulse wave velocity (PWV) and the augmentation index (AI), as tools for the assessment of vascular damage, are widely used and have been shown to predict cardiovascular disease (CVD). C-reactive protein (CRP) is a marker of inflammation. Several studies have noted that regular exercise is associated with reduced arterial stiffness. The lack of exercise among Malaysians and the increasing CVD morbidity and mortality among young men are of concern. In Malaysia, data on workplace exercise interventions are scarce. A programme was designed to enable subjects to increase their level of walking as part of their daily work routine, self-monitored using pedometers. The aim of this study was to evaluate the reduction of inflammation, by measuring CRP, and the improvement of arterial stiffness, measured by carotid-femoral PWV (PWVCF) and AI. A total of 70 young men (20-40 years) who were sedentary, achieved fewer than 5,000 steps/day in casual walking, and had 2 or more cardiovascular risk factors were recruited at the Institute of Vocational Skills for Youth (IKBN Hulu Langat). Subjects were randomly assigned to a control group (CG) (n=34; no change in walking) and a pedometer group (PG) (n=36; minimum target: 8,000 steps/day). CRP was measured by an immunological method, while PWVCF and AI were measured using a Vicorder. All parameters were measured at baseline and after 12 weeks. Data were analysed using the Statistical Package for the Social Sciences Version 22 (SPSS Inc., Chicago, IL, USA). At post-intervention, the CG step counts were similar (4983 ± 366 vs 5697 ± 407 steps/day). The PG increased step count from 4996 ± 805 to 10,128 ± 511 steps/day (P<0.001). The PG showed significant improvement in anthropometric variables and lipids (time and group effect, p<0.001). 
For the vascular assessment, the PG showed significant decreases (time and group effect, p<0.001) in PWV (7.21 ± 0.83 to 6.42 ± 0.89 m/s), AI (11.88 ± 6.25 to 8.83 ± 3.7 %) and CRP (pre = 2.28 ± 3.09, post = 1.08 ± 1.37 mg/L). However, no changes were seen in the CG. In conclusion, a pedometer-based walking programme may be an effective strategy for promoting increased daily physical activity, which reduces cardiovascular risk markers and thus improves cardiovascular health in terms of inflammation and arterial stiffness. Community interventions for health maintenance have the potential to adopt walking as an exercise and vascular fitness indices as performance-measuring tools.

Keywords: arterial stiffness, exercise, inflammation, pedometer

Procedia PDF Downloads 353
611 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects of human factors engineering as applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals in the UK and are considered injured or improper for transplantation. Digital microscopy adds information at a microscopic level about the grafts (G) in organ transplant (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased graft arrives at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM), based on virtual slides, on telemedicine systems (TS) for the tele-pathological evaluation (TPE) of grafts in organ transplantation. Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues was analyzed. In effect, this corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an OTE-TS-similar experimental telemedicine system (Exp.-TS) for simulating the integrated VS-based microscopic TPE of RG, LG and PG tissues. Simulation of DM on TS-based TPE was performed by 2 specialists on a total of 238 human renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) digital microscopic tissue images, for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used. 
Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose diseased RG, LG and PG tissues on the electronic spaces of the four TS (A, B, C, D) showed that DM on TS for TPE in OT works best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events affecting the quality of grafts will require the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will require investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and is dependent on the size of the electronic space of the applied TS, for the remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 280