Search results for: estimation purposes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3531


291 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst

Authors: Totsaporn Suwannaruang, Kitirote Wantala

Abstract:

The purposes of this research were to synthesize a nitrogen-doped titanium dioxide photocatalyst (N-doped TiO₂) by the hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effect of calcination temperature on the physical and chemical properties and photocatalytic efficiency of the catalysts was also investigated. The calcined N-doped TiO₂ photocatalysts were characterized for specific surface area, textural properties, bandgap energy, surface morphology, crystallinity, phase structure, elemental composition and charge states using the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods, UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) analysed with the Kubelka-Munk theory, wide-angle X-ray scattering (WAXS), focused ion beam scanning electron microscopy (FIB-SEM), X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS), respectively. The results showed that calcination temperature had a significant effect on surface morphology, crystallinity, specific surface area, pore diameter, bandgap energy and nitrogen content, but an insignificant effect on phase structure and the oxidation state of the titanium (Ti) atom. The N-doped TiO₂ samples exhibited only the anatase crystalline phase, because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was found at calcination temperatures of 650 and 700°C, resulting in increased pore diameter. The bandgap energy, determined by the Kubelka-Munk theory, was in the range 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant can shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen was detected only in the samples treated at 100, 300 and 400°C, and disappeared from 500°C onwards.
Nitrogen (N) atoms can be incorporated into the TiO₂ structure at interstitial sites. The uncalcined (100°C) sample displayed the highest percent paraquat degradation under UV and visible light irradiation because it had both the highest specific surface area and the highest nitrogen content. Moreover, percent paraquat removal decreased significantly with increasing calcination temperature. The nitrogen content in TiO₂ accelerated the reaction rate in combination with the specific surface area, which governs the generation of electrons and holes under illumination. Therefore, specific surface area and nitrogen content play important roles in the photocatalytic degradation of paraquat under UV and visible light illumination.
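The Kubelka-Munk band-gap extrapolation mentioned above can be sketched numerically. This is a minimal illustration on synthetic reflectance data (the edge energy and fit window are invented for the demonstration, not the paper's measurements), assuming the indirect-gap Tauc exponent of 1/2 commonly used for anatase:

```python
import numpy as np

def kubelka_munk(R):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) for diffuse reflectance R."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

def band_gap_tauc(h_nu, reflectance, fit_window):
    """Estimate an indirect band gap (eV) from a Tauc plot of (F(R)*h_nu)^(1/2).

    h_nu: photon energies in eV; reflectance: diffuse reflectance (0-1);
    fit_window: (lo, hi) energy range covering the linear absorption edge.
    """
    y = np.sqrt(kubelka_munk(reflectance) * h_nu)
    lo, hi = fit_window
    mask = (h_nu >= lo) & (h_nu <= hi)
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope  # x-intercept: edge extrapolated to y = 0

# Synthetic check: fabricate reflectance whose Tauc plot is linear above 3.10 eV
h_nu = np.linspace(2.8, 3.6, 200)
tauc = np.clip(5.0 * (h_nu - 3.10), 1e-6, None)   # absorption edge at 3.10 eV
F = tauc ** 2 / h_nu                              # invert the Tauc quantity
R = 1.0 + F - np.sqrt(F ** 2 + 2.0 * F)           # invert Kubelka-Munk for R
Eg = band_gap_tauc(h_nu, R, fit_window=(3.2, 3.5))
print(round(Eg, 2))  # prints 3.1, the synthetic edge energy
```

In practice the fit window is chosen over the linear portion of the measured absorption edge rather than fixed in advance.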

Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation

Procedia PDF Downloads 157
290 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance by the amount of flow circulating in front of that entrance. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and, ultimately, the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The following section summarizes the old TRRL rotary capacity model and the most recent HCM-7th modern roundabout capacity model.
Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern that leads to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its level of service is estimated. In view of further experimental applications, a collection of existing rotary intersections operating under the priority-to-circle rule has already begun, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and some further research developments are discussed.
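A sketch of how a modern roundabout capacity model feeds the HCM delay and level-of-service criteria is given below. The exponential capacity coefficients follow the HCM 6th-edition single-lane form and are assumptions for illustration; the HCM-7th calibrations and the TRRL weaving model combined in the paper are not reproduced here:

```python
import math

def entry_capacity(conflicting_flow_pcu):
    """Single-lane roundabout entry capacity (pcu/h) in the HCM exponential form
    c = A * exp(-B * v_c); A = 1380, B = 1.02e-3 follow the HCM 6th-edition
    single-lane model and may differ from HCM-7th calibrations."""
    return 1380.0 * math.exp(-1.02e-3 * conflicting_flow_pcu)

def control_delay(entry_flow, capacity, T=0.25):
    """Average control delay per vehicle (s), HCM unsignalized-intersection form.
    T is the analysis period in hours (0.25 = 15 min)."""
    x = entry_flow / capacity  # volume-to-capacity ratio
    return (3600.0 / capacity
            + 900.0 * T * ((x - 1.0)
                           + math.sqrt((x - 1.0) ** 2
                                       + (3600.0 / capacity) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

def level_of_service(delay):
    """HCM LOS thresholds for unsignalized intersections/roundabouts (s/veh)."""
    for los, limit in zip("ABCDEF", (10, 15, 25, 35, 50, float("inf"))):
        if delay <= limit:
            return los

c = entry_capacity(600)    # 600 pcu/h circulating in front of the entry
d = control_delay(450, c)  # 450 pcu/h entering
print(round(c), round(d, 1), level_of_service(d))
```

The paper's iteration over all entrances would repeat this calculation while updating the circulating flows until every entry reaches congestion simultaneously.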

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 86
289 Trade in Value Added: The Case of the Central and Eastern European Countries

Authors: Łukasz Ambroziak

Abstract:

Although the impact of production fragmentation on trade flows has been examined many times since the 1990s, the research was not comprehensive because of the limitations of traditional trade statistics. In the early 2010s, comprehensive databases containing world input-output tables (or indicators calculated on their basis) became available, increasing the possibilities for examining production sharing in the world. Trade statistics in value-added terms enable better estimates of trade changes resulting from internationalisation and globalisation, as well as of the benefits countries derive from international trade. There are many research studies on this topic in the literature. Unfortunately, trade in value added of the Central and Eastern European Countries (CEECs) has so far been insufficiently studied. Thus, the aim of the paper is to present changes in the value added trade of the CEECs (Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia and Slovenia) in the period 1995-2011. The concept of 'trade in value added' or 'value added trade' is defined as the value added of a country that is directly and indirectly embodied in the final consumption of another country. The typical question would be: 'How much value added is created in a country due to final consumption in other countries?' The data will be downloaded from the World Input-Output Database (WIOD). The structure of this paper is as follows. First, theoretical and methodological aspects related to the application of input-output tables in trade analysis will be studied. Second, a brief survey of the empirical literature on this topic will be presented. Third, changes in the value-added exports and imports of the CEECs will be analysed. Special attention will be paid to differences in bilateral trade balances using traditional trade statistics (in gross terms) on the one hand and value-added statistics on the other.
Next, in order to identify the factors influencing the value-added exports and imports of the CEECs, a generalised gravity model based on panel data will be used. The dependent variables will be value-added exports and imports. The independent variables will be, among others, the GDP of trading partners, the GDP per capita of trading partners, differences in GDP per capita, the inward FDI stock, geographical distance, the existence (or non-existence) of a common border, and membership (or not) in preferential trade agreements or in the EU. For comparison, an estimation will also be made based on exports and imports in gross terms. Initial results show that the gravity model explains the determinants of trade in value added better than those of gross trade (R² is higher in the former). The independent variables had the same direction of impact on value-added exports/imports as on gross exports/imports; only the values of the coefficients differed. The largest difference concerned geographical distance, which had a smaller impact on trade in value added than on gross trade.
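The log-linear gravity estimation described above can be sketched on synthetic data. The coefficients and the data-generating process below are invented for illustration; the actual study uses WIOD panel data and additional regressors such as common borders and trade-agreement dummies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic country-pair observations (illustrative, not WIOD data)

# Hypothetical generating process: exports rise with both GDPs, fall with distance
ln_gdp_i = rng.normal(6.0, 1.0, n)   # log GDP, exporter
ln_gdp_j = rng.normal(6.0, 1.0, n)   # log GDP, importer
ln_dist = rng.normal(7.0, 0.5, n)    # log bilateral distance
ln_vax = (1.0 + 0.9 * ln_gdp_i + 0.8 * ln_gdp_j - 0.7 * ln_dist
          + rng.normal(0, 0.1, n))   # log value-added exports

# OLS on the log-linear gravity equation
X = np.column_stack([np.ones(n), ln_gdp_i, ln_gdp_j, ln_dist])
beta, *_ = np.linalg.lstsq(X, ln_vax, rcond=None)

resid = ln_vax - X @ beta
r2 = 1.0 - resid.var() / ln_vax.var()
print(np.round(beta, 2), round(r2, 3))
```

Comparing R² from this regression with one fitted on gross exports is how the abstract's claim of a better fit for value-added trade would be checked.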

Keywords: central and eastern European countries, gravity model, input-output tables, trade in value added

Procedia PDF Downloads 239
288 The Implication of Small Group Therapy on Sexuality in Breast Cancer Survivors

Authors: Cherng-Jye Jeng, Ming-Feng Hou, Hsing-Yuan Liu, Chuan-Feng Chang, Lih-Rong Wang, Yen-Chin Lin

Abstract:

Introduction: The incidence of breast cancer has gradually increased in Taiwan, and its onset at younger ages means the disease strikes women in middle age, bringing challenges in terms of family, work, and illness. Breasts are symbols of femininity, as well as of sex. For women, breasts are important organs for female identity and sexual expression. Losing a breast not only affects the female role but can also affect sexual attraction and sexual desire. Thus, women with breast cancer who require mastectomies experience physical incompleteness, which affects their self-confidence, body image, and self-orientation. Purposes: 1. To understand the physical experience of women with breast cancer. 2. To explore the effects of sexual issues on the health of women with breast cancer. 3. To construct a domestic group model for sex-life issues for women with breast cancer. 4. To explore the accompaniment experiences and sexual relationship adjustments of spouses when women have breast cancer. Method: After the research plan passed IRB review, participants were recruited at the breast surgery clinic of the affiliated hospital to screen suitable subjects for entry into the group. Between March and May 2015, two sexual health and sex life consultation groups were conducted: (1) a group of 10 postoperative women with cancer; (2) a group of 4 married couples including postoperative women with cancer. Through sharing experiences and dialogue, the women achieved mutual support and growth. Data were organised and analysed descriptively using qualitative methods, and the group process was transcribed into transcripts for overall-content and category-content analysis. Results: The ten women with breast cancer believed that participating in the group helped them exchange experiences and improve their sexual health.
The main issues include: (1) after breast cancer surgery, patients generally received chemotherapy or estrogen suppressants, causing early menopause; in particular, vaginal dryness can cause pain or bleeding during intercourse, reducing their desire for sexual activity; (2) breast cancer accentuates the original state of spousal, family, and friend relationships; some women have support and care from their family, and their spouses emphasize health over the appearance of breasts, while others lack acceptance and support from their family, and some even hear spousal sarcasm about the loss of their breasts; (3) women with breast cancer show polarized optimism or pessimism in their emotions, beliefs, and body image regarding cancer, which is related to their original personalities, the causes to which they attribute the cancer, and the extent of their worry about relapse. Conclusion: The research results can serve as a reference for medical institutions and breast cancer volunteer teams attending to the health of women with breast cancer.

Keywords: women with breast cancer, experiences of objectifying the body, quality of sex life, sexual health

Procedia PDF Downloads 319
287 Transforming Ganges to be a Living River through Waste Water Management

Authors: P. M. Natarajan, Shambhu Kallolikar, S. Ganesh

Abstract:

By size and volume of water, the Ganges River basin is the biggest among the fourteen major river basins in India, and by Hindu faith it is the main 'holy river' of the nation. Of late, however, the pollution load from both domestic and industrial sources has been deteriorating the surface water, groundwater, and land resources, putting the environment of the Ganges River basin under threat. In response, the Indian government began to reclaim this river through the Ganges Action Plans I and II, starting in 1986, spending Rs. 2,747.52 crores ($457.92 million). But the water quality of the river, the groundwater, and the environment showed no improvement even after almost three decades of reclamation, and hence the new Indian government is now taking extra care to rejuvenate this river, allotting Rs. 2,037 crores ($339.50 million) in 2014 and Rs. 20,000 crores ($3,333.33 million) in 2015. The reason for the poor water quality and stinking environment even after three decades of reclamation is that the sewage receives either no treatment or only partial treatment. Hence, the authors suggest tertiary-level treatment of sewage from all sources and origins in the Ganges River basin, and recycling the entire treated volume for non-domestic uses. At a capacity of 20 million litres per day (MLD) per sewage treatment plant (STP), the basin needs about 2,020 plants to treat the entire sewage load. The cost of the STPs is Rs. 3,43,400 million ($5,723.33 million), and the annual maintenance cost is Rs. 15,352 million ($255.87 million). The advantages of the proposed exercise are as follows: a volume of 1,769.52 million m³ of biogas can be produced. Biogas is an energy carrier and can be used as a fuel for any heating purpose, such as cooking; it can also be used in a gas engine to convert the energy in the gas into electricity and heat.
About 3,539.04 million kilowatt-hours of electricity per annum can be generated from the biogas produced during wastewater treatment in the Ganges basin. The income from this electricity works out to Rs. 10,617.12 million ($176.95 million). This power can be used to bridge the supply-demand gap in the power-hungry villages, where 300 million people in India are without electricity even today, and to run the STPs themselves. The 664.18 million tonnes of sludge generated by the treatment plants per annum can be used in agriculture as manure, with suitable amendments. By arresting the pollution load, the 187.42 cubic kilometres (km³) of groundwater potential of the Ganges River basin could be protected from deterioration. Since the sewage can be recycled for non-domestic purposes, about 14.75 km³ of fresh water per annum can be conserved for future use. The total value of the water saved per annum is Rs. 22,11,916 million ($36,865.27 million), and each citizen of the Ganges River basin can save Rs. 4,423.83 ($73.73) per annum, or Rs. 12.12 ($0.202) per day, by recycling the treated water for non-domestic uses. Further, the environment of this basin could be kept clean by arresting the foul smell as well as the 3% of greenhouse gas emissions from the stinking waterways and land. These are the ways to reclaim the waterways of the Ganges River basin from deterioration.
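The headline energy and income figures quoted above can be reproduced with simple arithmetic. The per-unit yields (kWh per m³ of biogas and the tariff per kWh) are assumptions inferred from the ratios implied by the abstract's numbers, not sourced data:

```python
# Back-of-envelope reconstruction of the abstract's headline figures.

plant_capacity_mld = 20            # one STP treats 20 million litres per day
n_plants = 2020                    # number of plants proposed for the basin
total_sewage_mld = plant_capacity_mld * n_plants

biogas_million_m3 = 1769.52        # annual biogas volume quoted in the abstract
kwh_per_m3 = 2.0                   # assumed electrical yield per m3 of biogas
electricity_million_kwh = biogas_million_m3 * kwh_per_m3

tariff_rs_per_kwh = 3.0            # assumed tariff implied by the quoted income
income_million_rs = electricity_million_kwh * tariff_rs_per_kwh

print(total_sewage_mld)                      # 40400 MLD of treatment capacity
print(round(electricity_million_kwh, 2))     # 3539.04, matching the abstract
print(round(income_million_rs, 2))           # 10617.12, matching the quoted income
```

The ratios confirm that the abstract's electricity and income figures are internally consistent with a yield of roughly 2 kWh per m³ of biogas valued at about Rs. 3 per kWh.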

Keywords: Holy Ganges River, lifeline of India, wastewater treatment and management, making Ganges permanently holy

Procedia PDF Downloads 285
286 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding and k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a parsing step to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods.
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results across the evaluated machine learning algorithms. These findings, supported by detailed analysis of the confusion matrices, ROC curves, and F1 scores, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
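The baseline encodings compared in the study (one-hot, k-mers, and a Fourier transform of the numeric signal) can be sketched as follows. The toy sequence is illustrative only, and the Voss-style Fourier encoding shown is one common variant, not necessarily the exact formulation used by the authors:

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string into a (len, 4) 0/1 matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.int8)
    for pos, base in enumerate(seq.upper()):
        mat[pos, idx[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Count overlapping k-mers, returning a fixed-length 4**k vector."""
    vocab = {"".join(p): i for i, p in enumerate(product(BASES, repeat=k))}
    vec = np.zeros(len(vocab), dtype=np.int32)
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in vocab:          # skips ambiguous bases such as N
            vec[vocab[kmer]] += 1
    return vec

def voss_fourier(seq):
    """Fourier encoding of the one-hot (Voss) signal: magnitude spectrum
    of each base-indicator channel, concatenated into one vector."""
    oh = one_hot(seq).astype(float)
    return np.abs(np.fft.rfft(oh, axis=0)).ravel()

s = "ACGTACGTNACGT".replace("N", "")  # toy fragment, ambiguous base dropped
print(one_hot(s).shape, kmer_counts(s, 3).sum(), voss_fourier(s).shape)
```

Each encoding yields a numeric vector or matrix that the SVM, random forest, or neural network models mentioned above can consume directly.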

Keywords: DNA encoding, machine learning, Fourier transform

Procedia PDF Downloads 23
285 Reagentless Detection of Urea Based on ZnO-CuO Composite Thin Film

Authors: Neha Batra Bali, Monika Tomar, Vinay Gupta

Abstract:

A reagentless biosensor for the detection of urea, based on a ZnO-CuO composite thin film, is presented in the following work. Biosensors have immense potential for varied applications ranging from environmental and clinical testing to health care and cell analysis. The immense growth in the field of biosensors is due to today's huge requirement for techniques that are both cost-effective and accurate for the prevention of disease manifestation. The human body comprises numerous biomolecules which, at their optimum levels, are essential for its functioning; abnormal levels of these biomolecules, however, result in major health issues. Urea is one of the key biomolecules of interest, and its estimation is of paramount significance not only for the healthcare sector but also from an environmental perspective. If the level of urea in human blood/serum is abnormal, i.e., above or below the physiological range (15-40 mg/dl), it may lead to conditions such as renal failure, hepatic failure, nephritic syndrome, cachexia, urinary tract obstruction, dehydration, shock, burns, and gastrointestinal disorders. Various matrices, including metal nanoparticles, conducting polymers, and metal oxide thin films, have been exploited to immobilize urease in fabricating urea biosensors. Among them, zinc oxide (ZnO), a semiconductor metal oxide with a wide band gap, is of immense interest as an efficient matrix in biosensors by virtue of its natural abundance, biocompatibility, good electron communication features, and high isoelectric point (9.5). In spite of being such an attractive candidate, ZnO does not possess a redox couple of its own, which necessitates the use of electroactive mediators for electron transfer between the enzyme and the electrode, thereby hindering the realization of integrated and implantable biosensors.
In the present work, an effort has been made to fabricate a matrix based on a ZnO-CuO composite prepared by the pulsed laser deposition (PLD) technique, in order to incorporate redox properties into the ZnO matrix and to utilize it for reagentless biosensing applications. The prepared bioelectrode Urs/(ZnO-CuO)/ITO/glass exhibits high sensitivity (70 µA mM⁻¹ cm⁻²) for the detection of urea (5-200 mg/dl), with high stability (shelf life ˃ 10 weeks) and good selectivity (interference ˂ 4%). The enhanced sensing response obtained for the composite matrix is attributed to efficient electron exchange between the ZnO-CuO matrix and the immobilized enzymes, and the subsequent fast transfer of the generated electrons to the electrode via the matrix. This response is encouraging for fabricating a reagentless urea biosensor based on the ZnO-CuO matrix.

Keywords: biosensor, reagentless, urea, ZnO-CuO composite

Procedia PDF Downloads 290
284 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially, the problem is a conceptual one: biological taxonomies are formed on the basis of specific physical specimens, yet nomenclatural rules are used to provide the labels that describe these physical objects, and these labels are ambiguous representations of the specimens. An example is the name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of taxonomic concepts versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to assess the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data, and in fact includes these in the word characterisation so that 100% of the available evidence is counted. Corpus analysis has indeed been suggested as one way to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to test whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods from an alternative perspective. This research aims to provide evidence as to the validity of current methods of knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature.
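A minimal sketch of the collocation counting described above, assuming a toy two-sentence corpus; real lexicographic pipelines add lemmatisation, part-of-speech tagging, and association measures such as MI or log-likelihood rather than raw frequencies:

```python
from collections import Counter
import re

def collocates(corpus, node, window=2):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for sentence in corpus:
        tokens = re.findall(r"[a-z]+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok == node:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                # count every neighbour in the window except the node itself
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Toy corpus illustrating the ambiguity of the genus name from the abstract
corpus = [
    "The fern Melpomene flabelliformis grows at high elevation.",
    "The spider genus Melpomene was described from Central America.",
]
freq = collocates(corpus, "melpomene")
print(freq["fern"], freq["spider"])  # prints 1 1
```

Diverging collocate profiles around the same name ("fern" versus "spider") are exactly the kind of empirical signal that would reveal conceptual plurality behind a single nomenclatural label.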

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 137
283 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investments for economic growth and development has been central to the World Bank's strategy for poverty reduction. Among conventional surface transport infrastructures, roads are significant in facilitating the movement of human capital, goods, and services. When transport projects (i.e., roads, super-highways) are implemented, they come with some negative social values (costs), such as increased noise and air pollution for residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified. For example, the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, and increases accessibility, trade expansion, and safety. Aside from these benefits, travel time savings (TTS), which are the major economic benefit of urban and inter-urban transport projects and therefore integral to their economic assessment, are often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings in the presence of large-scale unemployment, underemployment, and other labour-induced distortions, have contributed to the failure to assign a value to travel time savings when estimating the benefits of transport schemes in developing countries.
This omission of the value of travel time savings from the benefits of transport projects in developing countries makes it difficult for investors and stakeholders to accept or dismiss projects, and biases appraisal toward schemes that favour reduced vehicle operating costs and other parameters rather than those that ease congestion, increase average speed, facilitate walking and handloading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the widespread informal labour activities in Sub-Saharan Africa, we construct a 'nationally ranked distribution of time values' and estimate the value of travel time savings as the area beneath the distribution. Compared with other approaches, our method captures both formal-sector workers and people who work outside the formal sector, whose changes in time allocation occur in the informal economy and in household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, and other sources.
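The "area beneath a ranked distribution of time values" can be sketched as a simple trapezoidal integration. The hourly time values below are invented for illustration and are not Sub-Saharan data; a real application would rank survey-based wages, informal earnings, and imputed household-production values across the whole population:

```python
import numpy as np

# Illustrative hourly time values (currency/h) for a mixed labour force:
# formal wages, informal earnings, and imputed household-production values.
time_values = np.array([0.2, 0.3, 0.5, 0.5, 0.8, 1.0, 1.5, 2.0, 3.5, 6.0])

ranked = np.sort(time_values)[::-1]      # "nationally ranked distribution"
share = np.linspace(0, 1, ranked.size)   # cumulative population share (x-axis)

# Value of travel time savings = area beneath the ranked curve,
# computed here with the trapezoidal rule.
vtts = float((((ranked[:-1] + ranked[1:]) / 2) * np.diff(share)).sum())
print(round(vtts, 3))
```

Because the full distribution is integrated, people whose time is reallocated within the informal economy contribute to the estimate instead of being dropped, which is the advantage the abstract claims over wage-only approaches.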

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa

Procedia PDF Downloads 108
282 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purposes of a hydraulic gate are to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and hydropower plants. High-tensile-strength steel plate is used as the construction material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation, and immersion in water, produce stress concentrations and high crack growth rates that affect the safety and serviceability of the hydroelectric power plant. Probing into their causes therefore requires stress distribution analysis, a very important and essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely-small-element method has been demonstrated to be suitable for analyzing the buckling phenomena of welding seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates constitute a three-dimensional system. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of a steel plate with a crack at the weld joint. Digital image correlation is a non-contact method for measuring the deformation of a test object, and with the rapid development of digital cameras, the cost of the technique has been reduced. Moreover, the DIC method offers the advantage of wide practical applicability in both indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor crack variations in a welded steel hydraulic gate and its deformation under loading.
Images can be captured during the real-time monitoring process to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of a welded steel plate with a crack. The stress intensity for three-dimensional analyses of different materials and reinforced regions of the steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of a welded steel plate under different loading stages. In particular, the proposed DIC method can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical behaviour of composite materials subjected to loading and operation.
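The core subset-matching step of DIC can be sketched with zero-normalized cross-correlation on a synthetic speckle image. This integer-pixel, single-camera sketch omits the sub-pixel interpolation, subset shape functions, and calibrated stereo camera pair that actual 3-D DIC requires:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_subset(reference, deformed, top_left, size, search=5):
    """Track one square subset from the reference to the deformed image
    by exhaustive integer-pixel search over a +/- `search` window."""
    r, c = top_left
    template = reference[r:r + size, c:c + size]
    best, best_rc = -2.0, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if (0 <= rr and 0 <= cc and rr + size <= deformed.shape[0]
                    and cc + size <= deformed.shape[1]):
                score = zncc(template, deformed[rr:rr + size, cc:cc + size])
                if score > best:
                    best, best_rc = score, (rr, cc)
    return best_rc, best

# Synthetic check: shift a random speckle pattern by (2, 3) pixels
rng = np.random.default_rng(1)
ref = rng.random((40, 40))
defo = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
(new_r, new_c), score = match_subset(ref, defo, top_left=(10, 10), size=9)
print(new_r - 10, new_c - 10, round(score, 2))  # recovered displacement (2, 3)
```

Repeating this match over a grid of subsets yields the displacement field, from which the strain variation around the weld crack is derived.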

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 520
281 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa

Authors: Tashinga Matindike

Abstract:

By definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of the creative process. Examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, repatriation involves the return of once-appropriated goods, while archiving denotes the preservation of a commodity for storage purposes in order to secure its continuity. These definitions form the foundation of the academic framework and the premise of the argument outlined in this paper. The paper aims to define, discuss and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. After the internet was introduced to the African continent in the early 1990s, this new form of technology brought a high degree of efficiency, which also resulted in the progressive transformation of computer-generated visual output. This, in turn, had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. Digital technology and the digitisation of creative processes then opened up new avenues for collating and recording information. One of the first visual artists to make use of digital software in his creative productions was the United States-based artist John Whitney, whose inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli.
In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What positive changes have resulted in art production in South Africa since the adoption of digital software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the repatriation of artworks and artefacts? The methodology of this paper takes a multifaceted form, including the analysis of data obtained through both qualitative and quantitative approaches.

Keywords: digital art, digitisation, technology, archiving, transformation and repatriation

Procedia PDF Downloads 52
280 Corporate In-Kind Donations and Economic Efficiency: The Case of Surplus Food Recovery and Donation

Authors: Sedef Sert, Paola Garrone, Marco Melacini, Alessandro Perego

Abstract:

This paper aims to enhance our current understanding of the motivations behind corporate in-kind donations and to find out whether economic efficiency may be a major driver. Our empirical setting consists of surplus food recovery and donation by companies in the food supply chain. This choice is motivated by growing attention to the paradox of food insecurity and food waste: an estimated 842 million people worldwide regularly do not get enough food, while approximately 1.3 billion tons of food is wasted globally every year. Recently, many authors have started considering surplus food donation to nonprofit organizations as a way to cope with the social issue of food insecurity and the environmental issue of food waste. The corporate philanthropy literature has examined the motivations behind corporate donations for social purposes, such as altruism, enhancement of employee morale, the organization's image, supplier/customer relationships, and local community support. However, the relationship with economic efficiency has not been studied, and in many cases pure economic efficiency as a decision-making factor is neglected. Although some studies hint at the economic value created by surplus food donation, such as saving landfill fees or obtaining tax deductions, so far no study has focused deeply on this phenomenon. In this paper, we develop a conceptual framework that explores the economic barriers and drivers of alternative surplus food management options, i.e., discounts, secondary markets, feeding animals, composting, energy recovery, and disposal. A case study methodology is used to conduct the research. Protocols for semi-structured interviews were prepared based on an extensive literature review and adapted after expert opinions.
The interviews were conducted mostly with the supply chain and logistics managers of 20 companies in the food sector operating in Italy, in particular in the Lombardy region. The results show that, in the current situation, food manufacturing companies can achieve cost savings by recovering and donating surplus food compared with other methods, especially disposal. On the other hand, the retail and food service sectors are not economically incentivized to recover and donate surplus food to disfavored populations. The paper shows that not only strategic and moral motivations but also economic motivations play an important role in managerial decision-making in surplus food management. We also believe that our research, while rooted in the surplus food management topic, delivers interesting implications for more general research on corporate in-kind donations. It also shows that there is considerable room for policy-making that favors the recovery and donation of surplus products.

Keywords: corporate philanthropy, donation, recovery, surplus food

Procedia PDF Downloads 312
279 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors

Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov

Abstract:

It is well known that quantitative evaluation of the quality control of economic processes (in particular, resource saving) by means of innovation factors faces three groups of problems: high uncertainty of the quality-management indicators, their considerable ambiguity, and the high cost of large-scale research. These problems stem from the contradictory objectives of enhancing quality control through innovation factors while preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world by criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods reconcile the above-mentioned objectives and reduce the conflict between the problems most effectively: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle 'from the design of an innovative product (technology) to its commercialization and the updating of customer-value parameters'; 2) the application of a so-called integrative-cyclical approach, consistent with the complexity and type of the concept, which allows a quantitative assessment of the stages of reconciling these objectives (from a baseline of imbalance, through compromise, to positive synergy).
For implementation, the integrative-cyclical approach incorporates the following mathematical tools: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the analysis results in a fuzzy-set model (to adjust the feature space); and non-parametric statistics (to decide on the completion or repetition of the cycle, depending on the direction and closeness of the rank correlation between indicators of goal disbalance). The repetition is performed after partial substitution of technical and technological ('hard') factors by management ('soft') factors, in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe this research is promising, as it provides consistent management decisions for reducing the severity of the above-mentioned contradictions and increasing the validity of the choice of resource-development strategies in terms of quality-management parameters and enterprise sustainability. Our experience in the field of quality resource-saving management and the achieved level of scientific competence allow us to hope that applying the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise resource-saving characteristics to the levels seen in developed post-industrial economies.
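The non-parametric step, deciding whether to repeat the cycle from the closeness of the rank correlation between goal-disbalance indicators, can be sketched with Spearman's rho. The indicator values, the 0.7 threshold, and the decision rule below are illustrative assumptions, not the authors' calibration.

```python
def ranks(xs):
    """Average 1-based ranks of a sequence, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def repeat_cycle(quality_ranks, factor_ranks, threshold=0.7):
    """Repeat the management cycle while the rank agreement stays weak."""
    return spearman_rho(quality_ranks, factor_ranks) < threshold
```

For example, indicator rankings [1, 2, 3, 4, 5] and [1, 3, 2, 5, 4] give rho = 0.8, so under the assumed 0.7 threshold the cycle would be considered complete.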

Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development

Procedia PDF Downloads 245
278 Exogenous Application of Silicon through the Rooting Medium Modulate Growth, Ion Uptake, and Antioxidant Activity of Barley (Hordeum vulgare L.) Under Salt Stress

Authors: Sibgha Noreen, Muhammad Salim Akhter, Seema Mahmood

Abstract:

Salt stress is an abiotic stress that takes a heavy toll on growth and development and reduces the productivity of arable and horticultural crops. Globally, a quarter of all arable land has fallen prey to this menace, and more is being encroached upon because of the use of brackish water for irrigation. Though barley is categorized as a salt-tolerant crop, its cultivars show wide genetic variability in their response to salinity. In addressing salt stress, silicon nutrition could be a facile tool for enhancing salt tolerance to sustain crop production. A greenhouse study was conducted to evaluate the response of barley (Hordeum vulgare L.) cultivars to silicon nutrition under salt stress. The treatments included (a) four barley cultivars (Jou-87, B-14002, B-14011, B-10008); (b) two salt levels (0 and 200 mM NaCl); and (c) two silicon levels (0 and 200 ppm, K2SiO3·nH2O), arranged as a factorial experiment in a completely randomized design with 16 treatments and 4 replicates. Plants were harvested 15 days after exposure to the experimental salinity and silicon conditions. Results revealed that various physiological and biochemical attributes differed significantly (p<0.05) in response to the treatments and their interactions. Cultivar 'B-10008' excelled in biological yield, chlorophyll constituents, antioxidant enzymes, and grain yield compared to the other cultivars. The biological yield of shoot and root was reduced by 27.3 and 26.5 percent under salt stress, while it was increased by 14.5 and 18.5 percent by exogenous application of silicon over the untreated check, respectively. Salt stress at 200 mM reduced total chlorophyll content, chlorophyll 'a', 'b' and the a/b ratio by 10.6, 16.8, 17.1 and 7.1 percent, while 200 ppm silicon improved these constituents by 10.4, 12.1, 10.2 and 10.3 percent over the untreated check, respectively.
The quantum of free amino acids and protein content was enhanced in response to both salt stress and silicon nutrition. The amounts of superoxide dismutase, catalase, peroxidase, hydrogen peroxide, and malondialdehyde rose by 18.1, 25.7, 28.1, 29.5, and 17.6 percent under salt stress relative to non-saline conditions. However, the values of these antioxidants were reduced, in proportion to salt stress, by 200 ppm silicon applied through the rooting medium. Salt stress reduced the number of tillers, the number of grains per spike, and 100-grain weight by 29.4, 8.6, and 15.8 percent; these parameters were improved by 7.1, 10.3, and 9.6 percent by silicon application over the untreated crop, respectively. It is concluded that the barley cultivar 'B-10008' showed greater tolerance and adaptability to saline conditions, and that the yield of barley could be raised by applying 200 ppm silicon at the vegetative growth stage under salt stress.

Keywords: salt stress, silicon nutrition, chlorophyll constituents, antioxidant enzymes, barley crop

Procedia PDF Downloads 38
277 Change of Education Business in the Age of 5G

Authors: Heikki Ruohomaa, Vesa Salminen

Abstract:

Regions face fierce competition to attract companies, businesses, inhabitants, and students, and thereby to improve living and business environments that are changing rapidly due to digitalization. From the industry's point of view, the availability of a skilled labor force and an innovative environment are crucial factors; qualified staff are needed to utilize the opportunities of digitalization and respond to future skills needs. The World Manufacturing Forum stated in its 2019 report that, within the next five years, 40% of workers will have to change their core competencies. Through digital transformation, with new technologies such as cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks, together with increasing intelligence and automation, enterprises can capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be part of the everyday life of citizens and present in the working day of the average employee in the future. For that reason, education systems and programs at all levels, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations and business environments. This article aims to identify how education, its content, the way it is delivered, and the education business as a whole are changing, and, most importantly, how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. The change of education programs and individual education modules can be supported by applied research projects.
Such projects can be used to build proofs of concept for new technology and new ways to teach and train, and, through the experience gathered, to change education content, teaching methods, and finally the education business as a whole. Major findings: Applied research projects can prove concept phases in real-environment field labs, testing technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent opportunities for students to complete assignments using new knowledge and content, and for teachers to test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces case study experiences of customer-oriented digital transformation projects and shows how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study draws on the experiences of research projects, customer-oriented field labs/learning environments, and education programs at Häme University of Applied Sciences.

Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation

Procedia PDF Downloads 176
276 Local Procurement in Ghana's Hotel Industry: A Study of the Driving Forces, Perceptions and Procurement Patterns

Authors: Adu-Ampomah Yaw Junior

Abstract:

Local procurement has become one of the latest trends in the discourse of sustainable tourism due to the economic benefits it generates for tourist destinations in developing countries: it helps create jobs and consequently alleviates poverty. However, there have been limited studies on local procurement patterns in developing countries. Research on hotel procurement practices has mainly emphasized the challenges hoteliers face when procuring locally, leaving questions about their motivations to engage in local procurement unanswered. Institutional theory provides a suitable framework for understanding these motivations, as it underlines the importance of individual cognitive perceptions in shaping organizational response strategies: the extent to which an issue is perceived to belong to the organization's responsibility; the organizational actors' belief about the losses or gains resulting from acting or not acting on an issue (degree of importance); and the organizational actors' belief about the probability of resolving an issue (degree of feasibility). These factors influence how an organization acts on an issue. Hence, this paper adopts an institutional perspective to examine local procurement patterns of food by hoteliers in Ghana. Qualitative interviews with 20 procurement managers about their procurement practices and motivations, together with interviews with other stakeholders for data triangulation purposes, indicated that most hotels sourced their food from middlemen who imported most of their products. Direct importation, however, was more prevalent among foreign-owned hotels than locally owned ones. The importation and use of foreign foods rather than local ones can partly be explained by the lack of pressure from NGOs and trade associations on hotels to act responsibly.
Though guests' menu preferences were perceived as important to hoteliers' business operations, the demand of western tourists for foreign food, primarily in foreign-owned hotels, made it less important to procure local produce. Lastly, hoteliers, particularly in foreign-owned hotels, perceive local procurement to be less feasible, raising concerns about the quality and variety of local produce. The paper outlines strategies to improve the perception and degree of local procurement. Firstly, there is a need for stakeholder engagement in order to make hoteliers feel responsible for acting on the issue. Again, it is crucial for the Ghanaian government to promote and encourage hotels to buy local produce. The government also has to make funds and storage facilities available for farmers in order to improve the quality and quantity of local produce, and sites need to be secured for farmers to engage in sustained farming. Furthermore, collaborations between stakeholders are needed to organize training programs for farmers, while hotels need to market local produce to their guests. Finally, the Ghana Hotels Association has to encourage hotels to engage in local procurement.

Keywords: sustainable tourism, feasible, important, local procurement

Procedia PDF Downloads 195
275 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine

Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri

Abstract:

To deal with the increasing demand for mass transportation, PT. Kereta Commuter Jabodetabek (KCJ) has implemented the Commuter Vending Machine (C-VIM). The C-VIM substitutes for conventional ticket windows, with the purpose of making the transaction process more efficient and introducing self-service technology to commuter line users. However, the implementation has caused problems and long queues when users are not accustomed to the machine. The objective of this research is to evaluate the user experience of the commuter vending machine, analyze the existing user experience problems, and achieve a better user experience design. The evaluation was done by giving task scenarios based on the features offered by the machine: daily insured ticket sales, ticket refund, and multi-trip card top-up. Twenty people, separated into two groups of respondents (5 males and 5 females in each), were involved in this research: an experienced and an inexperienced group, to test whether there is a significant difference between them. User experience was measured both quantitatively and qualitatively. The quantitative measurement includes user performance metrics such as task success, time on task, errors, efficiency, and learnability. The qualitative measurement includes the System Usability Scale questionnaire (SUS), the Questionnaire for User Interface Satisfaction (QUIS), and retrospective think-aloud (RTA). The usability performance metrics show that 4 out of 5 indicators differ significantly between the two groups, indicating that the inexperienced group has problems when using the C-VIM. The conventional ticket windows also show better usability performance metrics than the C-VIM.
From the data processing, the experienced group gave a SUS score of 62, with an acceptability of 'marginal low', a grade of 'D', and an adjective rating of 'good', while the inexperienced group gave a SUS score of 51, with an acceptability of 'marginal low', a grade of 'F', and an adjective rating of 'ok'. Both groups thus gave low scores on the System Usability Scale. The QUIS score of the experienced group is 69.18 and that of the inexperienced group 64.20; both averages are below 70, indicating a problem with the user interface. RTA was conducted through interview protocols to identify user experience issues with the C-VIM; the issues obtained were then sorted using the Pareto concept and diagram. The outcome of this research is an interface redesign using an activity relationship chart. This method resulted in a better interface with an average SUS score of 72.25, an acceptability of 'acceptable', a grade of 'B', and an adjective rating of 'excellent'. The time-on-task performance metric also shows significantly better times with the new interface design. The results show that the C-VIM does not yet deliver good performance and user experience.
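The SUS scores reported above follow the standard scoring rule for the ten-item questionnaire: odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch, with a hypothetical response set not taken from the study's data:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (index 0, 2, ...) contribute score - 1;
    even-numbered items (index 1, 3, ...) contribute 5 - score;
    the summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondent: mildly positive on every item
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Averaging such per-respondent scores over a group yields group scores like the 62 and 51 reported here.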

Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation

Procedia PDF Downloads 262
274 Organic Matter Distribution in Bazhenov Source Rock: Insights from Sequential Extraction and Molecular Geochemistry

Authors: Margarita S. Tikhonova, Alireza Baniasad, Anton G. Kalmykov, Georgy A. Kalmykov, Ralf Littke

Abstract:

There is high complexity in the pore structure of organic-rich rocks, caused by the combination of inter-particle porosity from inorganic mineral matter and ultrafine intra-particle porosity from both organic matter and clay minerals. Fluids are retained in that pore space, but there are major uncertainties in how and where the fluids are stored and to what extent they are accessible or trapped in 'closed' pores. A large degree of tortuosity may lead to fractionation of organic matter, so that lighter and more flexible compounds diffuse to the reservoir, whereas more complex compounds may be locked in place. Additionally, part of the hydrocarbons can be bound to solid organic matter (kerogen) and to the mineral matrix during expulsion and migration. Larger compounds can occupy thin channels, so that clogging or oil and gas entrapment will occur. Sequential extraction with different solvents is a powerful tool that provides more information about the distribution of trapped organic matter. The Upper Jurassic – Lower Cretaceous Bazhenov shale is one of the most petroliferous source rocks in West Siberia, Russia. Given its variable mineral composition, pore space distribution and thermal maturation, there are high uncertainties in the distribution and composition of organic matter in this formation. To address this issue, the geological and geochemical properties of 30 samples were considered, including mineral composition (XRD and XRF), structure and texture (thin-section microscopy), organic matter content, type and thermal maturity (Rock-Eval), as well as the molecular composition (GC-FID and GC-MS) of the materials extracted in each step of the sequential extraction. Sequential extraction was performed with a Soxhlet apparatus using different solvents, i.e., n-hexane, chloroform and ethanol-benzene (1:1 v:v), first on core plugs and later on pulverized material.
The results indicate that the studied samples are mainly composed of type II kerogen, with TOC contents varying from 5 to 25%. Thermal maturity ranges from immature to the late oil window. Whereas clay content decreased with increasing maturity, the amount of silica increased. According to the molecular geochemistry, hydrocarbons stored in open and closed pore space reveal different geochemical fingerprints. The results improve our understanding of hydrocarbon expulsion and migration in the organic-rich Bazhenov shale and therefore allow a better estimation of the hydrocarbon potential of this formation.

Keywords: Bazhenov formation, bitumen, molecular geochemistry, sequential extraction

Procedia PDF Downloads 170
273 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors

Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria

Abstract:

The constant demand for electrical energy, as well as increasing environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries can store high energy densities, contrary to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Owing to the abundance and affordability of carbon, dual carbon-based LICs are nowadays the common technology: normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is commonly employed as the anode. LICs are potential systems for applications in which both high energy and high power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility is to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device, and the disordered carbonaceous structure of HCs enables the storage of twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by a high volume of micropores, in which the charge is stored. Nevertheless, they normally lack mesopores, which are important mainly at high C-rates, as they act as transport channels for ions to reach the micropores.
Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, ACs are not characterized by high electrical conductivity, which is important for good performance in energy storage applications. A possible candidate to substitute ACs is carbon aerogels (CAs). CAs combine high porosity with high electrical conductivity, characteristics that are usually opposed in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied in a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC versus a hard carbon anode. An analogous full LIC made with a highly microporous commercial carbon cathode was also assembled for comparison purposes.

Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels

Procedia PDF Downloads 167
272 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decisions of Emergency Medical Services (EMS) regarding the assignment of medical emergency requests to Emergency Departments (ED). In the literature, this problem is known as 'hospital selection' and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology begins with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis shows that current studies mainly focus on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request; all ED-related issues are excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes transport time and releases the ambulance as quickly as possible. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that considers information on the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The subsequent steps of the research consisted of developing a general simulation architecture, implementing it in the AnyLogic software, and validating it on a realistic dataset.
The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the ED until the beginning of the clinical evaluation by the doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP determined with a constantly updated Winters model. Findings reveal that minimizing TTP as a hospital selection policy yields several benefits: it significantly reduces service throughput times in the ED with a minimal increase in travel time; it provides an immediate view of the saturation state of each ED; and it accounts for the case-mix present in the ED (i.e., the different triage codes), since different severity codes correspond to different service throughput times. Moreover, the predictive approach is more reliable than the retrospective approach in terms of TTP estimation, but is more difficult to apply. These considerations can support decision-makers in introducing hospital selection policies that enhance EMS performance.
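The dynamic approach, a constantly updated Winters model feeding a minimize-TTP assignment rule, can be sketched as follows. The additive seasonal form, the smoothing parameters, and the hourly period are illustrative assumptions, since the abstract does not specify the model's configuration; the hospital names and times are hypothetical.

```python
def holt_winters_update(state, y, alpha=0.3, beta=0.1, gamma=0.2, m=24):
    """One additive Holt-Winters (triple exponential smoothing) step.

    state = (level, trend, seasonals, t); returns the updated state and the
    one-step-ahead forecast made before observing y. With hourly data and
    m = 24, the seasonal component captures the daily pattern of ED load.
    """
    level, trend, seas, t = state
    s = seas[t % m]
    forecast = level + trend + s
    new_level = alpha * (y - s) + (1 - alpha) * (level + trend)
    new_trend = beta * (new_level - level) + (1 - beta) * trend
    seas = list(seas)
    seas[t % m] = gamma * (y - new_level) + (1 - gamma) * s
    return (new_level, new_trend, seas, t + 1), forecast

def select_hospital(travel_min, predicted_wait_min):
    """Assign the request to the ED minimizing predicted TTP = travel + wait."""
    return min(travel_min, key=lambda ed: travel_min[ed] + predicted_wait_min[ed])

# Hypothetical example: ED 'B' is farther away but much less saturated,
# so minimizing TTP (15 + 20 = 35 vs 10 + 40 = 50) selects it.
print(select_hospital({'A': 10, 'B': 15}, {'A': 40, 'B': 20}))  # B
```

In the dynamic policy, each ED's `predicted_wait_min` would be refreshed from its own Winters model as new waiting-time observations arrive.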

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

271 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has taken sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly, and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process the collected data locally on board (this processing may instead occur on a field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network.
For example, in a water level monitoring system, a weather forecast can be retrieved from external sources, and if heavy rainfall is expected, the server can instruct the sensor nodes to, for instance, increase the sampling rate or, conversely, switch to sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective is to use this deployed river level sensor network as a case study to present a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which we hope will inspire researchers working in the sensor research domain.
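As an illustration of the simplest of the three smart sensing methods named above, the sketch below shows threshold-based on-board pre-processing, in which a node transmits only alarm readings or significant level changes rather than every sample. The function name and parameter values are illustrative assumptions, not part of the deployed system.

```python
def filter_readings(levels, alarm_level, min_change):
    """On-board pre-processing: report only alarms or significant changes.

    levels      -- sequence of water level samples (metres)
    alarm_level -- level at which a flood alarm is always reported
    min_change  -- minimum change from the last report worth transmitting
    """
    reports = []
    last_sent = None
    for t, level in enumerate(levels):
        alarm = level >= alarm_level
        significant = last_sent is None or abs(level - last_sent) >= min_change
        if alarm or significant:
            reports.append((t, level, alarm))
            last_sent = level
    return reports


# Six samples: a stable period, a flood spike, then recession;
# only 4 of the 6 samples are worth transmitting
reports = filter_readings([1.00, 1.02, 1.03, 2.50, 2.60, 1.00],
                          alarm_level=2.0, min_change=0.5)
```

Filtering of this kind reduces radio traffic and power use on the node; the server can still tighten `min_change` or the sampling rate remotely when rainfall is forecast.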

Keywords: smart sensing, internet of things, water level sensor, flooding

270 Polymer Matrices Based on Natural Compounds: Synthesis and Characterization

Authors: Sonia Kudlacik-Kramarczyk, Anna Drabczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec

Abstract:

Introduction: In the preparation of polymer materials, compounds of natural origin are currently attracting increasing interest, particularly for materials considered for biomedical use. Such a material has to meet many requirements: it should be non-toxic, biodegradable, and biocompatible. Special attention is therefore directed to substances such as polysaccharides, proteins, or the basic building blocks of proteins, i.e., amino acids such as cysteine or histidine. These compounds may be crosslinked with other reagents, which leads to the preparation of polymer matrices. Alternatively, the previously mentioned requirements may be met by polymers obtained by biosynthesis, e.g., polyhydroxybutyrate. This polymer belongs to the group of aliphatic polyesters and is synthesized by microorganisms (selected strains of bacteria) under specific conditions. Matrices based on a given polymer can be modified with substances of various origins; such a modification may change their properties and/or provide the material with new features desirable from the viewpoint of a specific application. The described materials are synthesized using UV radiation: the photopolymerization process is fast, waste-free, and yields final products with favorable properties. Methodology: Polymer matrices were prepared by photopolymerization. The first step involved preparing solutions of the particular reagents and mixing them in the appropriate ratio. Next, a crosslinking agent and a photoinitiator were added to the reaction mixture, and the whole was poured into a Petri dish and treated with UV radiation. After the synthesis, the polymer samples were dried at room temperature and subjected to numerous analyses aimed at determining their physicochemical properties.
Firstly, the sorption properties of the obtained polymer matrices were determined. Next, the mechanical properties, i.e., tensile strength, were characterized, and the ability of all prepared polymer matrices to deform under applied stress was checked. Such a property is important from the viewpoint of applications of the analyzed materials, e.g., as wound dressings: a dressing has to be elastic because, depending on the location of the wound and its mobility, it must adhere properly to the wound. Furthermore, considering the use of the materials for biomedical purposes, it is essential to determine their behavior in environments simulating those occurring in the human body; therefore, incubation studies using selected liquids were also conducted. Conclusions: As a result of the photopolymerization process, polymer matrices based on natural compounds were prepared. These exhibited favorable mechanical properties and swelling ability. Moreover, biocompatibility in relation to simulated body fluids was established. It can therefore be concluded that the analyzed polymer matrices constitute interesting materials that may be considered for biomedical use and subjected to further, more advanced analyses using specific cell lines.

Keywords: photopolymerization, polymer matrices, simulated body fluids, swelling properties

269 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT

Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi

Abstract:

Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to determine whether there is a dosimetric difference between the two energies and how plan efficiency can be increased and plan complexity reduced. A further aim is to introduce a planning method in our department for treating prostate cancer with high-energy photons that does not increase patient toxicity and fulfils all dosimetric constraints for organs at risk (OAR). We then evaluate the 95% target coverage (PTV95), V5%, V2%, V1%, the low-dose volumes for OAR (V1Gy, V2Gy, V5Gy), the monitor units (beam-on time), and estimate the homogeneity index (HI), conformity index (CI), and gradient index (GI) for each treatment plan. Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer using the CT planning images acquired for radiotherapy purposes. Each plan contains two or three complete arcs with two or three different collimator angle sets. The maximum available dose rate is 1400 MU/min for 6FFF and 2400 MU/min for 10FFF; thus, to avoid changing the gantry speed during rotation, we tend to use a third arc in the 6FFF plan to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) adds a margin of 5 mm; a 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is 35 Gy delivered in five fractions to the PTV, with OAR constraints derived from those reported in the references.
Results: The indices CI = 0.99, HI = 0.7, and GI = 4.1 were the same for both energies, 6FFF and 10FFF, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF), and the total delivery time was 124 s for the 6FFF vs. 61 s for the 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected, and they favored 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID, in Gy·L) were recorded for all OAR and were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam gives a lower treatment time and fewer delivered MUs, and it showed lower integral and mean doses to organs at risk. In this study, we suggest using a 10FFF beam for SBRT prostate treatment, which has the advantage of lowering the treatment time and thereby reducing plan complexity with respect to 6FFF beams.
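The reported delivery times follow directly from the delivered MUs and the maximum dose rates. The sketch below reproduces that arithmetic and, for reference, computes the Paddick conformity index and a common gradient index; the abstract does not state which exact index definitions were used, so these formulas are assumptions, and the volumes in the example are hypothetical.

```python
def beam_on_time_s(total_mu, dose_rate_mu_per_min):
    """Minimum beam-on time in seconds for a given MU total and dose rate."""
    return total_mu / dose_rate_mu_per_min * 60.0

def paddick_ci(tv_cc, piv_cc, tv_piv_cc):
    """Paddick conformity index: (TV inside PIV)^2 / (TV * PIV)."""
    return tv_piv_cc ** 2 / (tv_cc * piv_cc)

def gradient_index(piv_half_rx_cc, piv_cc):
    """Gradient index: volume of half the prescription isodose / PIV."""
    return piv_half_rx_cc / piv_cc

# Beam-on times from the MU totals and dose rates reported in the study
t_6fff = beam_on_time_s(2907, 1400)   # ~124.6 s, close to the reported 124 s
t_10fff = beam_on_time_s(2468, 2400)  # ~61.7 s, close to the reported 61 s
```

The ~2:1 ratio of delivery times is driven almost entirely by the dose-rate ratio, since the MU totals of the two plans differ by only about 15%.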

Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer

268 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure

Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu

Abstract:

When considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response; in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial stiffness proportional, tangent stiffness proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the Normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model, and the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m plan on each floor was assumed. The story height was 3 m and the total height was 18 m. The unit weight of each floor was 1.0 t/m². The building's natural period was set to 0.36 s, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated how the dynamic characteristics differ with the initial damping model setting. With increasing maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased, and the 1st damping ratio increased.
In the natural frequency, the difference due to the initial damping model setting was small, but in the damping ratio a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest with tangent stiffness proportional damping. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated how the dynamic characteristics differ with the hysteretic model setting. With increasing maximum acceleration of the input earthquake motions, the natural frequency decreased with the TAKEDA model, but with the Normal-trilinear model it did not change. The damping ratio increased with both models, but was higher with the TAKEDA model than with the Normal-trilinear model. In conclusion, among the initial damping model settings, the tangent stiffness proportional model had the largest influence on the response, and among the hysteretic model settings, the TAKEDA model represented the nonlinear region better than the Normal-trilinear model. Our results provide a useful indicator for dynamic design.
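For the Rayleigh-type case above, the damping matrix is C = αM + βK, with α and β chosen so that target damping ratios are obtained at two reference frequencies. The sketch below solves for the two coefficients; the example frequencies and the 3% damping ratio are illustrative assumptions, not values from the study.

```python
import math

def rayleigh_coefficients(f1_hz, f2_hz, zeta1, zeta2):
    """Solve zeta_i = alpha/(2*w_i) + beta*w_i/2 for the mass-proportional
    coefficient alpha and the stiffness-proportional coefficient beta."""
    w1, w2 = 2 * math.pi * f1_hz, 2 * math.pi * f2_hz
    beta = 2 * (zeta2 * w2 - zeta1 * w1) / (w2 ** 2 - w1 ** 2)
    alpha = 2 * zeta1 * w1 - beta * w1 ** 2
    return alpha, beta

def damping_ratio(alpha, beta, f_hz):
    """Resulting Rayleigh damping ratio at an arbitrary frequency."""
    w = 2 * math.pi * f_hz
    return alpha / (2 * w) + beta * w / 2

# 1st mode at 1/0.36 s ~ 2.78 Hz; an assumed higher mode at 8 Hz; 3% at both
alpha, beta = rayleigh_coefficients(2.78, 8.0, 0.03, 0.03)
```

Between the two reference frequencies the resulting ratio dips below the targets, and outside them it grows, which is one reason the choice of damping model changes the nonlinear response as the effective frequencies shift during yielding.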

Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency

267 Material Handling Equipment Selection Using Fuzzy AHP Approach

Authors: Priyanka Verma, Vijaya Dixit, Rishabh Bajpai

Abstract:

This research paper is aimed at selecting appropriate material handling equipment among the given choices so that the level of automation in material handling can be enhanced. This work is a practical case study of material handling systems in a consumer electronic appliances manufacturing organization. The choices of material handling equipment among which the decision has to be made are Automated Guided Vehicles (AGVs), Autonomous Mobile Robots (AMRs), Overhead Conveyors (OCs), and Battery Operated Trucks/Vehicles (BOTs). There is a need to attain a certain level of automation in order to reduce human intervention in the organization, and this requirement can be met by the material handling equipment mentioned above. The main motive for selecting this equipment for study was the corporate financial strategy of investment and the return obtained from that investment within a stipulated time frame: since low-cost automation of material handling has to be achieved, equipment was considered for which the investment per unit is less than 20 lakh rupees (INR) and the recovery period is less than five years. The fuzzy analytic hierarchy process (FAHP) is applied here for selecting equipment; the four choices are evaluated against four major criteria and 13 sub-criteria and prioritized on the basis of the weights obtained. The FAHP used here makes use of triangular fuzzy numbers (TFNs), thereby remedying the inability of traditional AHP to deal with the subjectiveness and imprecision of the pairwise comparison process. The range of values for general rating purposes for all decision-making parameters is kept between 0 and 1, based on expert opinions captured on the shop floor. These experts were familiar with the operating environment and shop floor activity control.
Instead of generating exact values, the FAHP generates ranges of values to accommodate the uncertainty in the decision-making process. The four major criteria selected for evaluating the available material handling equipment are materials, technical capabilities, cost, and other features. The thirteen sub-criteria listed under these four major criteria are weighing capacity, load per hour, material compatibility, capital cost, operating cost, maintenance cost, speed, distance moved, space required, frequency of trips, control required, safety, and reliability issues. The key finding is that, among the four major criteria, cost emerged as the most important and is one of the key aspects on which material handling equipment selection is based. On further evaluating the available equipment against each sub-criterion, the AGV scored the highest weight in most of the sub-criteria. The complete analysis shows that the AGV is the best material handling equipment with respect to all the decision criteria selected in the FAHP, and it is therefore beneficial for the organization to carry out automated material handling in the facility using AGVs.
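The paper does not give its FAHP equations, so the sketch below illustrates one standard variant, Buckley's geometric-mean method with triangular fuzzy numbers (l, m, u), on a hypothetical two-criteria comparison; the matrix values and criterion names are assumptions for demonstration only.

```python
import math

def fahp_weights(matrix):
    """Buckley's geometric-mean FAHP for a pairwise matrix of TFNs (l, m, u)."""
    n = len(matrix)
    # fuzzy geometric mean of each row, component-wise
    gms = [tuple(math.prod(row[j][k] for j in range(n)) ** (1 / n)
                 for k in range(3)) for row in matrix]
    total = tuple(sum(g[k] for g in gms) for k in range(3))
    # fuzzy weight: divide (l, m, u) by the column total taken as (u, m, l)
    fuzzy_w = [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in gms]
    crisp = [sum(w) / 3 for w in fuzzy_w]      # centroid defuzzification
    s = sum(crisp)
    return [c / s for c in crisp]              # normalized crisp weights


# 'cost' judged moderately more important than 'technical capabilities'
matrix = [[(1, 1, 1), (2, 3, 4)],
          [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)]]
weights = fahp_weights(matrix)  # cost receives the larger weight
```

The same machinery extends to the 4-criteria and 13-sub-criteria hierarchy of the study: weights are computed per level and multiplied down the tree to score each equipment alternative.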

Keywords: fuzzy analytic hierarchy process (FAHP), material handling equipment, subjectiveness, triangular fuzzy number (TFN)

266 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools for assessing the nutritional status of a population, and a main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many sources of error and variability that can arise when building them, which can lead to under- or overestimation of the nutrient intake of a population. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. The data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs; these variables were defined based on an extensive literature review, and a descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to obtain information on 69% of the references: several informants referred to printed documents that were not accessible, and internet searches were unsuccessful. Of the 9 final tables, n = 8 are from Latin America, and n = 5 of these were constructed by the indirect method (compilation of already published data), with a database from the United States Department of Agriculture (USDA) as the main source of information. One FCT was constructed by the direct method (bromatological analysis) and has its origin in Ecuador. All of the tables (100%) made a clear distinction between foods and their cooking methods, 88% expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were those of INCAP (Central America) and the Composition of Foods (Mexico).
The most frequently referenced table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most of the tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods show variations in composition; a database therefore cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP (Central America), given its proximity to our country and a list of food items very similar to ours. It is also imperative to have as a reference the Ecuadorian food composition table, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population. In case of disparate values, we will take only the INCAP values, since that table is updated.

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables

265 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon caused by the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes well. These optimal bandwidths also indicate the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights, where the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset, and a support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approaches the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtained similar results. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves the classification performance but also reduces the cost of obtaining hyperspectral images.
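The band-weighting idea above can be sketched as follows: a generalized RBF kernel with one bandwidth per feature, and a ranking of bands by the reciprocal of the tuned bandwidth. In the study the bandwidths are tuned by a genetic algorithm against the class separability measure; here they are simply assumed values for illustration.

```python
import math

def generalized_rbf(x, y, bandwidths):
    """RBF kernel with a separate bandwidth sigma_d for each feature d."""
    return math.exp(-sum((xi - yi) ** 2 / (2.0 * s ** 2)
                         for xi, yi, s in zip(x, y, bandwidths)))

def rank_bands(bandwidths):
    """Band indices in descending order of weight 1/sigma_d:
    the smaller the bandwidth, the more important the band."""
    return sorted(range(len(bandwidths)),
                  key=lambda d: 1.0 / bandwidths[d], reverse=True)


# Hypothetical tuned bandwidths for three bands
sigmas = [0.5, 5.0, 1.0]
order = rank_bands(sigmas)    # band 0 is most important, band 1 least
k_same = generalized_rbf([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], sigmas)
```

Taking a prefix of `order` yields the nested feature subsets (10, 20, 50, ... bands) whose accuracies are reported above, and the selected prefix determines which sensor bands actually need to be acquired.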

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

264 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes

Authors: Mohsen Hababalahi, Morteza Bastami

Abstract:

Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during previous earthquakes, and there are many comprehensive reports on such events. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, whose necessary conditions are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines are structurally very different from other structures (being long and light in mass), comparison of the results of previous earthquakes with those for other structures suggests that the liquefaction hazard for buried pipelines is not high unless the governing parameters, such as earthquake intensity and loose soil conditions, are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations of actual earthquakes. The damage investigations have revealed that the pipeline damage ratio (number/km) is much larger in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and the water transmission pipelines were severely damaged due to the occurrence of liquefaction. The model consists of a polyethylene pipeline 100 meters long and 0.8 meters in diameter, covered by light sandy soil with a burial depth of 2.5 meters from the surface.
Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, a classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between the soil and the pipelines. The results of this study indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that, although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the liquefaction risk for buried pipelines.

Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method

263 Hiveopolis - Honey Harvester System

Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios

Abstract:

Traditional means of harvesting honey are often stressful for honeybees: each time honey is collected, a portion of the colony can die. In consequence, the colonies’ resilience to environmental stressors decreases, which ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step toward more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedure. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. The system shows promising features, e.g., high reliability, which could be a key advantage over current honey harvesting technologies. Our original concept of fractional honey harvesting has been to remove honey only from "safe" locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology, and areas for improvement. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality, while the honey sticks to the foundation because of surface tension forces amplified by the geometry. In the future, by monitoring the weight and therefore the capped honey cells on our honey harvester frames, we will be able to remove honey as soon as the weight measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey poses challenges in temperate locations when a smooth flow of honey is required; we use resistive heaters to soften the propolis and wax and to unglue the moving parts during extraction.
These heaters can also melt the honey slightly to reach the required flow state, and precise control of them allows us to operate the device for several purposes. As an actuation method, we use ‘Nitinol’ springs that are activated by heat. Unlike conventional stepper or servo motors, which we also evaluated during development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive: we observed bees not only walking on the artificial comb but also building wax, filling gaps with propolis, and storing honey. This also shows that bees do not mind living in spaces and hives built from 3D-printed materials, although we do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and effective overall operation.

Keywords: honey harvesting, honeybee, hiveopolis, nitinol

262 Enhancing Precision in Abdominal External Beam Radiation Therapy: Exhale Breath Hold Technique for Respiratory Motion Management

Authors: Stephanie P. Nigro

Abstract:

The Exhale Breath Hold (EBH) technique presents a promising approach to enhance the precision and efficacy of External Beam Radiation Therapy (EBRT) for abdominal tumours, including those of the liver, pancreas, kidneys, and adrenal glands. These tumours are challenging to treat because of their proximity to organs at risk (OARs) and the significant motion induced by respiration and physiological variations such as stomach filling; respiratory motion alone can displace abdominal organs by up to 40 mm, complicating accurate targeting. While current practices such as limited fasting help reduce motion related to digestive processes, they do not address respiratory motion. 4DCT scans are used to assess this motion, but they require extensive workflow time and expose patients to higher doses of radiation. The EBH technique, which involves holding the breath at exhale with no air in the lungs, stabilizes internal organ motion, thereby reducing respiratory-induced motion. The primary benefit of EBH is the reduction in treatment volume sizes, specifically the Internal Target Volume (ITV) and Planning Target Volume (PTV), as demonstrated by smaller ITVs when gated in EBH. This reduction also improves the quality of 3D Cone Beam CT (CBCT) images by minimizing respiratory artifacts, facilitating soft tissue matching akin to stereotactic treatments. Patients suitable for EBH must meet criteria including the ability to hold their breath for at least 15 seconds and to maintain a consistent breathing pattern; those who do not qualify follow the traditional 4DCT protocol. The implementation involves an EBH planning scan and additional short EBH scans to ensure reproducibility and assist in contouring and volume expansions, with a Free Breathing (FB) scan used for setup purposes. Treatment planning on EBH scans leads to smaller PTVs, though intrafractional and interfractional breath hold variations must be accounted for in the margins.
The treatment decision process includes performing CBCT at EBH intervals, with careful matching and adjustment based on soft tissue and fiducial markers. Initial studies at two sites will evaluate the necessity of multiple CBCTs, assessing shifts and the benefits of initial versus mid-treatment CBCT. Successful implementation requires thorough patient coaching, staff training, and verification of breath holds, despite potential disadvantages such as longer treatment times and patient exhaustion. Overall, the EBH technique offers significant improvements in the accuracy and quality of abdominal EBRT, paving the way for more effective and safer treatments for patients.

Keywords: abdominal cancers, exhale breath hold, radiation therapy, respiratory motion
