Search results for: case definition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12283

1303 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution

Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques

Abstract:

The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally cannot afford radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated, interoperable software modules able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices and extends to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. Comparing SIPRAD-Body with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP).
The exams came from patients with different ethnicities, ages, tumor severities and body regions. Even for services that already have workstations, SIPRAD can work alongside PCs, because the two systems communicate through the DICOM protocol, which increases workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, given its degree of similarity, in both new and existing radiotherapy planning services.
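The contour-comparison criteria named above can be sketched on binary masks (filled contours). This is a minimal illustration, not the SIPRAD-Body implementation: the 10x10 masks are invented, and only the Jaccard index, area, and mass-center criteria come from the abstract.

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """|A intersect B| / |A union B| for two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def mass_center(mask):
    """(row, col) centroid of a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Two overlapping 5x5 "contour fills" on a toy 10x10 slice
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
similarity = jaccard_index(a, b)   # 16 / 34, about 0.47
```

In practice the same functions would be applied slice by slice to the automatic and reference ROIs, and the mass-center difference is simply the distance between the two centroids.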

Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)

Procedia PDF Downloads 298
1302 Effect of Manure Treatment on Furrow Erosion: A Case Study of Sagawika Irrigation Scheme in Kasungu, Malawi

Authors: Abel Mahowe

Abstract:

Furrow erosion is a major problem menacing the sustainability of irrigation in Malawi and polluting water bodies, resulting in the death of many aquatic animals. Many rivers in Malawi are drying up due to poor practices around these water bodies. Furrow erosion is one of the causes of sedimentation in these rivers; although its deteriorating effect is gradual, and hence often neglected, it has disastrous long-term effects on the water bodies and on the aquatic animals that suffer when sediments are carried into them. An assessment of the effect of manure treatment on furrow erosion was carried out at the Sagawika irrigation scheme, located in Kasungu District in the northern part of Malawi. The soil on the field was clay loam and had just been tilled. The field had an average furrow slope of 0.2% and was divided into two blocks, A and B. Each block had 20 V-shaped furrows, each 10 m long, spaced 0.6 m apart. In each block, five furrows were constructed with each of three types of manure, mixed into moderately moist soil, and one set of five furrows was constructed without manure treatment. The manures used were goat manure, pig manure, and manure from crop residues, applied at a rate of 5 kg/m. Tomato was planted in the two blocks at a spacing of 0.15 m between rows and 0.15 m between planting stations. Irrigation water was led from the feeder canal into the irrigation furrows using siphons, with the discharge into each furrow set at 1.86 L/s. The ¾ rule was used to determine the cut-off time for the irrigation cycles in order to reduce run-off at the tail end. During each irrigation cycle, samples of the runoff water were collected at one-minute intervals and analyzed for total sediment concentration, for use in estimating the total soil sediment loss.
The results of the study show that a significant amount of soil is lost from soils with little organic matter, while erosion was low in the furrows constructed with manure treatment within the blocks. In addition, the results show that manures differ in their ability to control erosion: pig manure proved to have a greater ability to bind the soil together than the other manures, since there was a reduction in the amount of sediment at the tail end of the furrows constructed with this type of manure. The results indicate that manure contains organic matter which helps soil particles bind together, resisting the erosive force of water. The use of manure when constructing furrows in soil with little organic matter can greatly reduce erosion, thereby also reducing the pollution of water bodies and improving conditions for aquatic animals.
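The sediment-loss estimate from the timed runoff samples can be sketched as below. The concentration series is hypothetical; only the 1.86 L/s siphon discharge and the one-minute sampling interval come from the study.

```python
# Estimate total soil loss per furrow from timed runoff sediment samples
discharge_l_per_s = 1.86                    # siphon discharge into each furrow
interval_s = 60.0                           # one-minute sampling interval
conc_g_per_l = [4.2, 3.5, 2.9, 2.4, 2.0]    # hypothetical sediment concentrations

# Each sample represents one interval of runoff at the set discharge
soil_loss_kg = sum(c * discharge_l_per_s * interval_s for c in conc_g_per_l) / 1000.0
```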

Keywords: aquatic, erosion, furrow, soil

Procedia PDF Downloads 289
1301 The Development of Cultural Routes: The Case of Greece

Authors: Elissavet Kosta

Abstract:

Introduction: In this research, we propose the methodology required for the planning of cultural routes, in order to prepare substantiated proposals for the development and planning of cultural routes in Greece in the near future. Our research began in 2016. Methodology: A combination of primary and secondary research is used as the project methodology. Furthermore, this study follows a multidisciplinary approach, using both qualitative and quantitative data analysis models. The theoretical part of the project is documented mainly through secondary research, in combination with bibliographic sources, while the data on the research topic are collected exclusively through primary research (questionnaires and interviews). Cultural Routes: A cultural route is defined as a brand-name touristic product, that is, a product of cultural tourism shaped around a specific connecting element. Given its potential, the cultural route is an important 'tool' for the management and development of cultural heritage. A constant development of cultural routes has been observed at the international level over the last decades, as it is widely accepted that cultural tourism plays an important role in the global tourism industry. Cultural Routes in Greece: For Greece in particular, we believe, no systematic action has yet been taken to develop cultural routes. The cultural routes that include Greece and have been designed on a world scale, as well as the cultural routes designed on Greek territory up to this moment, are initiatives of the Council of Europe, the World Tourism Organization (UNWTO) and the 'Diazoma' association.
Regarding the study of cultural routes in Greece as a multidimensional concept, the following concerns have arisen. Firstly, we are concerned about the general impact of cultural routes at the local and national level, specifically in the economic sector. Moreover, we deal with concerns regarding the natural environment, and we delve into the educational aspect of cultural routes in Greece. In addition, the audience we aim at is both specific and broad, and we put forward the institutional framework of the study. Finally, we conduct the development and planning of new cultural routes, with museums in mind as both the starting and ending point of a route. Conclusion: The contribution of our work is twofold: firstly, we attempt to create cultural routes in Greece, and secondly, an interdisciplinary approach is engaged towards realizing our study objective. In particular, our aim is to take advantage of all the ways in which the promotion of a cultural route can positively influence the way of life of society. As a result, we intend to analyze how a cultural route can become a well-organized activity that can be used as a social intervention to develop tourism, strengthen the economy and improve access to cultural goods in Greece during the economic crisis.

Keywords: cultural heritage, cultural routes, cultural tourism, Greece

Procedia PDF Downloads 238
1300 A Novel Harmonic Compensation Algorithm for High Speed Drives

Authors: Lakdar Sadi-Haddad

Abstract:

In the past few years, the study of very high-speed electrical drives has seen a resurgence of interest; an inventory of the number of scientific papers and patents dealing with the subject confirms its relevance. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. These machines have, as their main advantage, a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties but has poor heat conduction. This results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications, such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high-speed machines, the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (specific modulation strategies, active damping methods, ...). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of the standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine.
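The core of any harmonic compensation scheme is isolating one harmonic from a measured waveform and injecting its opposite. The following is a minimal sketch of that idea only, not the paper's algorithm: the frequencies, amplitudes, and the ideal unity-gain subtraction are all illustrative assumptions.

```python
import numpy as np

f1, fh = 1_000.0, 5_000.0   # fundamental and 5th-harmonic frequency, Hz (assumed)
fs = 200_000.0              # sampling rate, Hz (assumed)
t = np.arange(2000) / fs    # exactly 10 fundamental periods

# Measured phase current: fundamental plus an unwanted stator harmonic
i_meas = 10.0 * np.sin(2 * np.pi * f1 * t) + 0.8 * np.sin(2 * np.pi * fh * t + 0.3)

# Single-bin DFT at fh extracts the harmonic's complex amplitude
c = 2.0 / len(t) * np.sum(i_meas * np.exp(-2j * np.pi * fh * t))
amp, phase = np.abs(c), np.angle(c)

# Subtract the reconstructed harmonic (ideal unity-gain compensation)
i_comp = i_meas - amp * np.cos(2 * np.pi * fh * t + phase)
```

A real drive would close this loop inside the vector control, with the extracted component injected into the voltage reference rather than subtracted offline.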

Keywords: active harmonic compensation, eddy current losses, high speed machine

Procedia PDF Downloads 396
1299 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM). The SFEMM method improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, this method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond stretch, bond angle, and bond torsion of carbon atoms.
Furthermore, the capability of the method in simulating the conformation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, thereby concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
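A harmonic bond-stretch potential of the kind a truss element's constitutive model can carry is sketched below. The stiffness and equilibrium length are illustrative C-C values, not parameters taken from the paper, and this is only the stretch term, not the angle or torsion potentials.

```python
import numpy as np

K_BOND = 652.0   # bond stiffness, kcal/mol/A^2 (assumed illustrative value)
R0 = 1.42        # equilibrium C-C bond length in graphene, A

def bond_energy_force(xa, xb):
    """Energy U = 0.5*k*(r - r0)^2 and the force on atom a for one bond."""
    d = xb - xa
    r = np.linalg.norm(d)
    u = 0.5 * K_BOND * (r - R0) ** 2
    f_a = K_BOND * (r - R0) * d / r   # pulls atom a toward b when stretched
    return u, f_a

# At the equilibrium length both energy and force vanish, mirroring the claim
# that the model carries no initial strain or stress
u_eq, f_eq = bond_energy_force(np.zeros(3), np.array([R0, 0.0, 0.0]))
```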

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 153
1298 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR

Authors: Ivana Scidà, Francesco Alotto, Anna Osello

Abstract:

Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in the field of training and education, as it allows learners to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment, engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a center specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed: an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of use of machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering Courses of the Polytechnic of Turin, has been designed to realistically simulate the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at environmental conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model.
The process includes the creation of an independent application, which, using Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers, assessing the acquisition of the educational notions presented in the experience through a virtual multiple-choice quiz and an overall evaluation report. The results have shown that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.

Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality

Procedia PDF Downloads 132
1297 Examining Terrorism through a Constructivist Framework: Case Study of the Islamic State

Authors: Shivani Yadav

Abstract:

The study of terrorism lends itself to the constructivist framework, as constructivism focuses on the importance of ideas and norms in shaping interests and identities. Constructivism is pertinent to understanding the phenomenon of a terrorist organization like the Islamic State (IS), which opportunistically utilizes radical ideas and norms to shape its 'politics of identity'. This 'identity', which is at the helm of the preferences and interests of actors, in turn shapes actions. The paper argues that an effective counter-terrorism policy must recognize the importance of ideas in order to counter the threat arising from acts of radicalism and terrorism. Traditional theories of international relations, with their emphasis on the state-centric security problematic, exhibit several limitations and problems in interpreting the phenomenon of terrorism. With the changing global order, these theories have failed to adapt to the changing dimensions of terrorism, especially 'newer' actors like the Islamic State. The paper observes that IS distinguishes itself from other terrorist organizations in the way it recruits and spreads its propaganda. Not only are its methods different, but its tools (like social media) are also new. Traditionally, too, force alone has rarely been sufficient to counter terrorism, and it seems especially unlikely to completely root out an organization like IS. The time is ripe to change the discourse around terrorism and counter-terrorism strategies. The counter-terrorism measures adopted by states, which primarily focus on mitigating threats to national security, are preoccupied with the statist objectives of the continuance of state institutions and the maintenance of order. This limitation prevents these theories from addressing questions of justice and the 'human' aspects of ideas and identity. These counter-terrorism strategies adopt a problem-solving approach that attempts to treat the symptoms without diagnosing the disease.
Hence, these restrictive strategies fail to look beyond calculated retaliation against violent actions, and do not address the underlying causes of discontent, pertaining to 'why' actors turn violent in the first place. What traditional theories also overlook is that overt acts of violence may have several causal factors behind them, some of which are rooted in the structural state system. Exploring these root causes through the constructivist framework helps to decipher the process of the 'construction of terror' and to move beyond the 'what' of theorization, in order to describe 'why', 'how' and 'when' terrorism occurs. The study of terrorism would benefit greatly from a constructivist analysis in order to explore non-military options for countering the ideology propagated by IS.

Keywords: constructivism, counter terrorism, Islamic State, politics of identity

Procedia PDF Downloads 191
1296 Prophylactic Effect of Dietary Garlic (Allium sativum) Inclusion in Feed of Commercial Broilers with Coccidiosis Raised at the Experimental Animal Unit of the Department of Veterinary Medicine, University of Ibadan, Oyo State, Nigeria

Authors: Ogunlesi Olufunso, John Ogunsola, Omolade Oladele, Benjamin Emikpe

Abstract:

Context: Coccidiosis is a parasitic disease that affects poultry production, leading to economic losses. Garlic is known for its medicinal properties and has been used as a natural remedy for various diseases. This study investigates the prophylactic effect of garlic inclusion in the feed of commercial broilers with coccidiosis. Research Aim: The aim of this study is to determine the possible effect of garlic meal inclusion in poultry feed on the body weight gain of commercial broilers and to investigate its therapeutic effect on broilers with coccidiosis. Methodology: A case-control study was conducted for eight weeks with one hundred Arbor Acres commercial broilers separated into five (5) groups from day-old; 6,000 Eimeria oocysts were orally inoculated into each broiler in the different groups. Feed intake, body weight gain, feed conversion ratio, oocyst shedding rate, histopathology and erythrocyte indices were assessed. Findings: The inclusion of garlic meal in the broilers' diet resulted in an improved feed conversion ratio, decreased oocyst counts, reduced diarrhoeic fecal spots, decreased susceptibility to coccidial infection, and increased packed cell volume (PCV). Theoretical Importance: This study contributes to the understanding of the prophylactic effect of garlic supplementation, including its antiparasitic properties, on commercial broilers with coccidiosis. It highlights the potential use of non-conventional feed additives, or ayurvedic herbs and spices, in the treatment of poultry diseases. Data Collection and Analysis Procedures: The study collected data on feed intake, body weight gain, oocyst shedding rate, histopathological observations, and erythrocyte indices. Data were analyzed using Analysis of Variance and Duncan's Multiple Range Test. Questions Addressed: The study addressed the possible effect of garlic meal inclusion in poultry feed on the body weight gain of broilers and its therapeutic effect on broilers with coccidiosis.
Conclusion: The study concludes that garlic inclusion in the feed of broilers has a prophylactic effect, including antiparasitic properties, resulting in an improved feed conversion ratio, reduced oocyst counts and increased PCV.
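The feed conversion ratio reported above is a simple quotient, sketched here for clarity. The intake and gain figures are hypothetical, not the trial's measurements.

```python
# Feed conversion ratio: feed consumed per unit of body weight gained
def fcr(feed_intake_kg, weight_gain_kg):
    """Lower FCR means better feed efficiency."""
    return feed_intake_kg / weight_gain_kg

control_fcr = fcr(52.0, 26.0)   # hypothetical group without garlic meal
garlic_fcr = fcr(50.0, 27.8)    # hypothetical garlic-supplemented group
improved = garlic_fcr < control_fcr
```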

Keywords: broilers, eimeria spp, garlic, Ibadan

Procedia PDF Downloads 89
1295 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important to provide energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat-exchanger-based cooling, the use of different adsorbents such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it, downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38 °C to 20 °C in the summer, or even to 0 °C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created on the outer side, after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated in the case of low thermal resistance between the layers. The MATLAB programming language is used for multiphysical model development, numerical calculations and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional two layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the counterflow heat exchanger system placed on the plant's wall, for various cases, allows the thickness of the gas layers and of the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
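An order-of-magnitude estimate of the condensate yield over the quoted 38 °C to 20 °C cooling can be sketched with the Magnus approximation for saturation vapor pressure. This is an illustrative back-of-the-envelope check, not the paper's MATLAB model; the saturation assumption and the Magnus constants are assumptions.

```python
import math

def p_sat(t_c):
    """Saturation vapor pressure of water (Pa), Magnus approximation."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def vapor_density(t_c, rh=1.0):
    """Water vapor density (kg/m^3), ideal gas with R_v = 461.5 J/(kg K)."""
    return rh * p_sat(t_c) / (461.5 * (t_c + 273.15))

# Cooling saturated gas from ~38 C (digester) to 20 C (summer shell):
# the excess vapor condenses out and can be drained from the shell bottom
condensed = vapor_density(38.0) - vapor_density(20.0)   # roughly 0.03 kg/m^3
```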

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 550
1294 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. For practically large data sets, matrix-matrix multiplication faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme, for arbitrary colluding workers and private databases, with the proposed SGPD code, which provides a smaller computational complexity at the workers.
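The linear-precoding idea above can be illustrated with a toy MDS-style code over row blocks of X: the master recovers W = XY from any 2 of 3 workers, tolerating one straggler. The block sizes and evaluation points are illustrative; this is not the PSGPD scheme and carries no privacy guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 2))

X0, X1 = X[:2], X[2:]                   # each worker stores half of X's rows
points = [1.0, 2.0, 3.0]                # distinct evaluation points
tasks = [X0 + a * X1 for a in points]   # encoded share for each worker

# Suppose worker 1 straggles; workers 0 and 2 return their products
res0, res2 = tasks[0] @ Y, tasks[2] @ Y
a, b = points[0], points[2]

# res_i = X0@Y + a_i * X1@Y: invert the 2x2 Vandermonde system
X1Y = (res2 - res0) / (b - a)
X0Y = res0 - a * X1Y
W = np.vstack([X0Y, X1Y])               # equals X @ Y
```

The recovery threshold here is 2: any two evaluation points determine the degree-1 "polynomial" X0 + a*X1, which is the minimum-distance property the abstract refers to.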

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 125
1293 Doped TiO2 Thin Films Microstructural and Electrical Properties

Authors: Mantas Sriubas, Kristina Bockute, Darius Virbukas, Giedrius Laukaitis

Abstract:

In this work, doped TiO2 (dopants: Ca, Mg) was investigated. A comparison between the physical vapour deposition methods of electron beam evaporation and magnetron sputtering was performed, and the structural and electrical properties of the formed thin films were investigated. Thin films were deposited on different types of substrates: SiO2, Alloy 600 (Fe-Ni-Cr) and Al2O3. The structural properties were investigated using an Ambios XP-200 profilometer, a Hitachi S-3400N scanning electron microscope (SEM), a Quad 5040 X-ray energy-dispersive spectroscope (EDS) (Bruker AXS Microanalysis GmbH), and a D8 Discover X-ray diffractometer (XRD) (Bruker AXS GmbH) with glancing-angle focusing geometry in the 20–70° range, using Cu Kα1 radiation (λ = 0.1540562 nm). The impedance spectroscopy measurements were performed using a Probostat® (NorECs AS) measurement cell in the frequency range from 10⁻¹ Hz to 10⁶ Hz, under reducing and oxidizing conditions, in the temperature range of 200 °C to 1200 °C. The investigation of the e-beam deposited Ca- and Mg-doped TiO2 thin films shows that the films are dense, without any visible pores or cavities, and grow in zone T according to the Barna–Adamik structure zone model (SZM). The substrate temperature was kept at 600 °C during deposition, with Ts/Tm ≈ 0.32 (substrate temperature Ts, coating material melting temperature Tm). Surface diffusion is high at this temperature; however, grain boundary migration is strongly limited. This means that the structure is inhomogeneous, and the columnar structure is mostly visible in the upper part of the films. According to XRD, increasing the Ca dopant concentration increases the crystallinity of the formed thin films, the crystallite size increases linearly, and the Ca dopants act as promoters of crystallization. The thin films consist of the anatase TiO2 phase, with the exception of 2% Ca-doped TiO2, where a small Ca peak arises.
In the case of Mg-doped TiO2, the intensities of the XRD peaks decrease with increasing Mg molar concentration. This means that there are fewer diffraction planes of a particular orientation in thin films with higher dopant concentrations. Thus, the crystallinity decreases with increasing Mg concentration, and the Mg dopants act as inhibitors. The impedance measurements show that the dopants changed the conductivity of the formed thin films. The conductivity varies from 10⁻³ S/cm to 10⁻⁴ S/cm at 800 °C under wet reducing conditions. The microstructure of the magnetron-sputtered TiO2 thin films differs from that of the films deposited using e-beam deposition, influencing the other structural and electrical properties.
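Crystallite size from XRD peak broadening, of the kind discussed above, is commonly estimated with the Scherrer equation. The sketch below uses the Cu Kα1 wavelength quoted in the abstract; the FWHM value and the shape factor K = 0.9 are illustrative assumptions, not measured data from this work.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.1540562, k=0.9):
    """Crystallite size in nm: D = K * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Anatase (101) reflection near 2-theta = 25.3 deg, hypothetical 0.4 deg FWHM
size_nm = scherrer_size(0.4, 25.3)   # on the order of 20 nm
```

Narrowing peaks (smaller FWHM) correspond to larger crystallites, which is why increasing peak intensity and sharpness with Ca doping indicates crystallite growth, while broadening with Mg doping indicates the opposite.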

Keywords: electrical properties, electron beam deposition, magnetron sputtering, microstructure, titanium dioxide

Procedia PDF Downloads 298
1292 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is the most prominent cause of cancer incidence and mortality among women. This study presents a statistical analysis of a cohort of over 250 patients diagnosed with breast cancer and subtyped by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and the standard protocol was followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. A Ventana Benchmark XT machine was used for automated IHC of the samples. The antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were the chi-squared test and correlation tests at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Result: A chi-squared test of homogeneity was performed to test for equality of distribution, and Luminal B was found to be the most prevalent molecular subtype of breast cancer at our institute. A worse prognosis in breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells; our analysis at p < .01 showed significant dependence. Molecular subtype does not depend on age, and age is likewise an independent variable with respect to Ki-67 expression. A chi-squared test performed on the human epidermal growth factor receptor 2 (HER2) statuses of patients showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the Ki-67 value depends upon HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, which suggests that progesterone receptor (PR) proteins are over-expressed when the expression of Ki-67 protein is elevated.
Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, of the 257 patients in the cohort, none of those diagnosed with Luminal A showed a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p < .01). HER2 is an important prognostic factor in breast cancer, and the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer prognosis.
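The dependence tests above are chi-squared tests of independence on contingency tables (the study used SPSS). As a hedged sketch of the mechanics, the snippet below computes the Pearson chi-squared statistic for a hypothetical 2x2 table of HER2 status against dichotomized Ki-67; the counts are invented for illustration and are not the institute's data.

```python
# Sketch of a chi-squared test of independence (no Yates correction).
# The 2x2 counts are hypothetical: rows = HER2 status, cols = Ki-67 high/low.

def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[60, 15],   # HER2-positive: Ki-67 high / low (hypothetical)
            [20, 45]]   # HER2-negative: Ki-67 high / low (hypothetical)
stat = chi_squared_statistic(observed)
# The critical value for df = 1 at p = .01 is about 6.635; a statistic
# above it indicates dependence at the paper's p < .01 level.
print(stat > 6.635)
```

For real analyses, `scipy.stats.chi2_contingency` additionally returns the p-value and applies the continuity correction for 2x2 tables.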

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 124
1291 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only during plant turnarounds, it has come into widespread use in the gas industry. Inaccurate measurement at fiscal metering stations is likely the most significant factor behind mischarges in the natural gas industry in Libya. Even a trivial measurement error adds up to a rapidly escalating financial burden on custody-transfer transactions; the unaccounted gas quantity transferred annually via orifice plates in Libya can be estimated at several million dollars. As oil and gas wealth is Libya's sole source of income, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research in this regard, so deeper knowledge of the flow field in a typical orifice meter is indispensable. In recent years, CFD has rapidly become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications; its greatest advantages are insight into the underlying physical phenomena and the prediction of all relevant parameters and variables with high spatial and temporal resolution. In this paper, the flow of air through an orifice meter was numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings compared with vena contracta tappings. The computed discharge coefficients were compared with discharge coefficients estimated by ISO 5167.
The influences of orifice bore thickness, orifice plate thickness, bevel angle, and the perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as the model. The results highlighted that the discharge coefficients were highly responsive to variations in plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are considered the ideal arrangement. In general, it was also found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, so further thorough consideration is still needed.
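For context, the discharge coefficient enters through the defining orifice-meter relation qm = Cd/√(1-β⁴) · ε · (π/4)d² · √(2Δpρ). The sketch below implements that relation and its inversion; it is a minimal illustration only, and deliberately omits the ISO 5167 Reader-Harris/Gallagher correlation that the abstract compares against. The numerical inputs are illustrative, not the study's CFD data.

```python
import math

# Hedged sketch of the defining orifice-meter relation (mass flow from
# differential pressure); the full ISO 5167 Cd correlation is omitted.

def orifice_mass_flow(cd, beta, d, dp, rho, epsilon=1.0):
    """Mass flow qm [kg/s] through an orifice of bore d [m], diameter
    ratio beta, differential pressure dp [Pa], upstream density rho
    [kg/m^3], and expansibility factor epsilon."""
    return (cd / math.sqrt(1.0 - beta**4)) * epsilon \
        * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho)

def discharge_coefficient(qm, beta, d, dp, rho, epsilon=1.0):
    """Invert the relation above to get Cd from a measured flow rate."""
    return qm * math.sqrt(1.0 - beta**4) \
        / (epsilon * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho))

# Round trip with illustrative numbers (2-in pipe, beta = 0.5, air):
qm = orifice_mass_flow(cd=0.6, beta=0.5, d=0.0254, dp=25_000, rho=1.2)
print(discharge_coefficient(qm, beta=0.5, d=0.0254, dp=25_000, rho=1.2))
```

The round trip recovers the assumed Cd, which is how a CFD-predicted flow field can be reduced to a discharge coefficient for comparison with the standard.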

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 119
1290 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour and heat transfer of viscous fluids within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations were solved with a numerical simulation technique, which was validated by comparing accessible practical values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a proportional effect on the minor and major frictional losses, mostly at very small Reynolds numbers of about 60-80. At these low Reynolds numbers, the viscosity and the minor frictional losses decrease as the temperature of the viscous liquids increases. Three equations and models were identified that support the numerical simulation; via interpolation and integration of the variables extended to the walls of the mini-channel, they yield the utmost reliability for engineering and technology calculations for turbulence-impacting jets in the near future. In searching for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding; however, other physical factors associated with the Navier-Stokes equations must be checked to avoid uncertain turbulence of the fluid flow.
This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; as a result, certain assumptions were dropped from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
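The very small Reynolds numbers cited above (about 60-80) place the flow deep in the laminar regime, where the Darcy friction factor follows f = 64/Re for a circular channel. A minimal sketch of these two standard relations, with illustrative property values rather than the study's:

```python
# Minimal sketch: Reynolds number of a mini-channel flow and the laminar
# Darcy friction factor f = 64/Re, which is large at Re ~ 60-80.
# Property values below are illustrative, not the study's.

def reynolds_number(rho, velocity, diameter, mu):
    """Re = rho * v * D / mu for a circular (mini-)channel."""
    return rho * velocity * diameter / mu

def darcy_friction_factor_laminar(re):
    """Darcy-Weisbach friction factor for fully developed laminar flow."""
    return 64.0 / re

re = reynolds_number(rho=1000.0, velocity=0.064, diameter=0.001, mu=0.001)
print(re, darcy_friction_factor_laminar(re))  # Re ~ 64 -> f ~ 1.0
```

The inverse dependence of f on Re also illustrates why raising the liquid temperature (lowering viscosity, raising Re) reduces the frictional losses noted above.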

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 177
1289 The Existential in a Practical Phenomenology Research: A Study on the Political Participation of Young Women

Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso

Abstract:

This communication presents proposed questions about the existentials in research on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenology (AHP) method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience in political participation. The data collection techniques are the descriptive story and the phenomenological interview. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics; its ultimate objective is to gain access to the meaning structures of lived experience by appropriating them, clarifying them, and reflectively making them explicit. Human experiences are always lived through existentials: fundamental themes that are useful in exploring meaningful aspects of our life worlds. Everyone experiences the world through the existentials of lived relationships, the lived body, lived space, lived time, and lived things. Phenomenological research, then, also tacitly asks about the existentials, universal themes useful for exploring significant aspects of our life world and of the particular phenomena under study. Four main existentials prove especially helpful as guides for reflection in the research process: relationship, body, space, and time. In our case, for example, we may ask how the existentials of relationship, body, space, and time can guide us in exploring the structures of meaning in the lived experience of political participation as a woman and a young person. The study is not yet finished, as we are currently conducting a phenomenological thematic analysis of the collected stories of lived experience.
Yet, we have already identified some fragments of text that show the existentials in the participants' experiences, which we transcribe below. 1) Relationality, the experienced I-Other: how relationships are experienced in our narratives about political participation as young women. One example: “As we had known each other for a long time, we understood each other with our eyes; we were all a little bit on the same page, thinking the same thing.” 2) Corporeality, the lived body: how the lived body is experienced in activities of political participation as a young woman. Examples: “My blood was boiling, but it was not the time to throw anything in their face, we had to look for solutions”; “I had a lump in my throat and I wanted to cry.” 3) Spatiality, the lived space: how one experiences the lived space in political participation activities as a young woman. One example: “And the feeling I got when I saw [it] it's like watching everybody going into a mousetrap.” 4) Temporality, lived time: how one experiences lived time in political participation activities as a young woman. One example: “Then, there were also meetings that went on forever…”

Keywords: applied hermeneutic phenomenology, existentials, hermeneutics, phenomenology, political participation

Procedia PDF Downloads 96
1288 The Use of Gender-Fair Language in CS National Exams

Authors: Moshe Leiba, Doron Zohar

Abstract:

Computer Science (CS) and programming are still considered a boys' club and a male-dominated profession. This is also the case in high schools and higher education. In Israel, no different from the rest of the world, fewer than 35% of the CS students who take the matriculation exams are female. The Israeli matriculation exams are written in masculine-form language. Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. Several strategies can be employed to make languages gender-fair and to treat women and men symmetrically (especially in languages with grammatical gender), among them neutralization and using the plural form. This research aims at exploring computer science teachers' beliefs regarding the use of gender-fair language in exams. An exploratory quantitative research methodology was employed to collect the data. A questionnaire was administered to 353 computer science teachers, 58% female and 42% male. 86% had been teaching for at least 3 years, and 59% of them had 7 years of teaching experience. 71% of the teachers taught in high school, and 82% of them were preparing students for the matriculation exam in computer science. The questionnaire contained 2 matriculation exam questions from previous years and open-ended questions. Teachers were asked which form they think is best suited: (a) the existing (masculine) form, (b) both gender full forms (e.g., he/she), (c) both gender short forms, (d) the plural form, (e) the neutral form, or (f) the female form. 84% of the teachers recognized the need to change the existing masculine form in the matriculation exams, and about 50% of them thought that the plural form was the best-suited option. When comparing the teachers who are pro-change with those who are against, no differences in gender or teaching experience were found. The teachers who are pro gender-fair language justified it as making the exams more personal and motivating for the female students.
Those who thought that the masculine form should remain argued that the female students do not complain and that a change in form would not influence female students' choice to study computer science; some even argued that while the change would not affect enrolment, it could only improve students' sense of identity or feeling toward the profession (which seems like a misconception). This research suggests that the teachers are pro-change and believe that re-formulating the matriculation exams is the right step towards encouraging more female students to choose computer science as their major study track and towards bridging the gap to gender equality. This indicates a bottom-up approach: not long after this research was conducted, the Israeli Ministry of Education decided to change the matriculation exams to gender-fair language using the plural form. In the coming years, with the transition to web-based examination, it is suggested to use personalization and adjust the language form in accordance with the student's gender.

Keywords: computer science, gender-fair language, teachers, national exams

Procedia PDF Downloads 113
1287 Juvenile Fish Associated with Pondweed and Charophyte Habitat: A Case Study Using Upgraded Pop-up Net in the Estuarine Part of the Curonian Lagoon

Authors: M. Bučas, A. Skersonas, E. Ivanauskas, J. Lesutienė, N. Nika, G. Srėbalienė, E. Tiškus, J. Gintauskas, A. Šaškov, G. Martin

Abstract:

Submerged vegetation enhances the heterogeneity of sublittoral habitats; macrophyte stands are therefore essential elements of aquatic ecosystems for maintaining a diverse fish fauna. Fish-habitat relations have been extensively studied in streams and coastal waters but are still underestimated in lakes and estuaries. The aim of this study is to assess temporal (diurnal and seasonal) patterns of juvenile fish assemblages associated with common submerged macrophyte habitats, which have spread significantly during the recent decade in the upper littoral part of the Curonian Lagoon. The assessment was performed by means of an upgraded pop-up net approach, resulting in much more precise sampling than other techniques. The optimal number of samples (i.e., pop-up nets) required to cover >80% of the total number of fish species depended on the time of day in both study sites: at least 7 and 9 nets in the evening (18:00-24:00) in the Southern and Northern study sites, respectively. In total, 14 fish species were recorded, dominated by perch and roach (48% and 24%, respectively). Multivariate analysis showed that water salinity and seasonality (temperature or sampling month) were the primary factors determining fish assemblage composition. The Southern littoral area, less affected by brackish water conditions, hosted a higher number of species (13) than the Northern site (8). In the latter site, brackish-water-tolerant species (three-spined and nine-spined sticklebacks, spiny loach, roach, and round goby) were more abundant than in the Southern site, where perch and ruffe dominated. Spiny loach and nine-spined stickleback were more frequent in September, while ruffe, perch, and roach occurred more in July. The diel dynamics of the common species such as perch, roach, and ruffe followed the general pattern, but were species-specific and depended on the study site, habitat, and month.
The species composition did not differ significantly between macrophyte habitats; however, it differed from the results obtained in 2005 at both study sites, indicating the importance of the charophyte stands that have expanded in the littoral zone during the last decade.
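The ">80% of the total number of species" criterion above is a species-accumulation question. As a hedged sketch of the mechanics, the snippet below accumulates species over nets in collection order and reports the first net count that reaches the coverage target; the per-net catches are invented, and real accumulation curves are normally averaged over many random orderings of the samples.

```python
# Illustrative sketch: how many pop-up nets are needed before the
# cumulative species list covers a target fraction of the species pool.
# The catches below are made up, not the lagoon data.

def samples_for_coverage(catches, target=0.8):
    """catches: list of per-net species sets. Returns the number of nets
    needed (in the given order) to cover `target` of the pooled species."""
    pool = set().union(*catches)
    seen = set()
    for count, catch in enumerate(catches, start=1):
        seen |= catch
        if len(seen) >= target * len(pool):
            return count
    return len(catches)

nets = [{"perch", "roach"}, {"roach", "ruffe"},
        {"spiny loach"}, {"round goby"}]
print(samples_for_coverage(nets))  # 4 of 5 pooled species by the 3rd net
```

Averaging this count over shuffled orderings (a rarefaction curve) would give the order-independent estimate the study's "optimal number of samples" refers to.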

Keywords: diel dynamics, charophytes, pondweeds, herbivorous and benthivorous fishes, littoral, nursery habitat, shelter

Procedia PDF Downloads 190
1286 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design

Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo

Abstract:

For centuries, the forming processes found in nature have served as a source of inspiration for both architects and designers. It seems as if most human artifacts are based on ideas stemming from observation of the biological world and its principles of growth. In fact, in the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, this principle of employing matter has not changed since the first creation. Only recently, with the help of additive-generative fabrication processes through Computer Aided Design (CAD), have designers been enabled to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology that reduces resources and integrates functions into an additively manufactured component. However, looking at the currently employed lattice structures, it is very clear that their geometries have not been thoroughly designed but rather taken from the basic-geometry libraries usually provided by the CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models comprise both the animate and inanimate world, with examples ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots, and the minimal surfaces of soap bubbles up to the rhythmical arrangement of molecular geometry, as in the case of SiOC (carbon-rich silicon oxycarbide). This ideation process led to the design of a geometric cell, which served as a basic module for the lattice structure, whereby the cell was created in visual analogy to its respective natural model.
The spatial lattices were fabricated additively, mostly in volumes of [X]3 by [Y]3 by [Z]3 units, using selective powder-bed melting in polyamide with 50 mm (z-axis) and 100 µm resolution, and were subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when they are designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure mode that can be exploited for purposes where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.
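One common way such elastic-zone tests are reduced to a comparable number is a stiffness value: the least-squares slope of the load-displacement record within the elastic region. This is a hedged sketch of that reduction, not the laboratory's actual protocol, and the data points are invented.

```python
# Hedged sketch: lattice stiffness as the least-squares slope of a
# load-displacement record in the elastic zone. Toy data, not the study's.

def elastic_stiffness(displacement_mm, load_n):
    """Least-squares slope (N/mm) of load vs. displacement."""
    n = len(displacement_mm)
    mean_x = sum(displacement_mm) / n
    mean_y = sum(load_n) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(displacement_mm, load_n))
    sxx = sum((x - mean_x) ** 2 for x in displacement_mm)
    return sxy / sxx

disp = [0.0, 0.1, 0.2, 0.3, 0.4]     # mm
load = [0.0, 5.0, 10.0, 15.0, 20.0]  # N (perfectly linear toy record)
print(elastic_stiffness(disp, load))  # ~50 N/mm
```

Comparing such slopes across cell designs would quantify which bio-inspired lattices store and return energy stiffly and which soften toward the controlled-collapse behaviour described above.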

Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing

Procedia PDF Downloads 151
1285 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social-science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new approaches to such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images; it is often compared with text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, both technological and social (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). We have identified the most efficient fields for Picture Mining inductively and through case studies: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design.
Though we have found that Picture Mining will be useful in these fields, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Compared to meals and food, respondents do not often take pictures of fashion or upload them to online services such as Facebook and Instagram, because of the difficulty of taking them. We conclude that we should be more careful when analyzing pictures in the fashion area, because some bias may still exist even though the environment for taking pictures has changed drastically in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 365
1284 Quality and Shelf life of UHT Milk Produced in Tripoli, Libya

Authors: Faozia A. S. Abuhtana, Yahia S. Abujnah, Said O. Gnann

Abstract:

Ultra-high-temperature (UHT) processed milk is widely distributed and preferred in numerous countries all over the world due to its relatively high quality and long shelf life. Because of the notably high consumption of UHT milk in Libya and the scarcity of local studies on the product, this study was designed to assess the shelf life of locally produced as well as imported reconstituted sterilized whole milk samples marketed in Tripoli, Libya. Four locally produced and three imported brands were used in this study. All samples were stored at room temperature (25 ± 2 °C) for an 8-month period and subjected to physical, chemical, microbiological, and sensory tests. These tests included measurement of pH, specific gravity, and percent acidity, and determination of fat, protein, and melamine content. Microbiological tests included total aerobic count, total psychrotrophic bacteria, total spore-forming bacteria, and total coliform counts. Results indicated no detectable microbial growth of any type during the study period and no detectable melamine in any sample. On the other hand, a gradual decline in pH accompanied by a gradual increase in % acidity was observed in both locally produced and imported samples. Both pH and % acidity reached their lowest and highest values, respectively, during the 24th week of storage: pH values were (6.40, 6.55, 6.55, 6.15) for the local brands and (6.30, 6.50, 6.20) for the imported brands, while % acidity reached (0.185, 0.181, 0.170, 0.183) and (0.180, 0.180, 0.171), respectively. A similar pattern of decline was also observed in specific gravity, fat, and protein content in some local and imported samples, especially at later stages of the study. In both cases, some of the recorded pH, % acidity, specific gravity, and fat content values were in violation of the accepted limits set by Libyan standard no.
356 for sterilized milk. These changes in pH, % acidity, and other constituents of the UHT sterilized milk during storage coincided with a gradual decrease in the degree of acceptance of the stored milk samples of both types, as shown by the sensory scores recorded by the panelists. In either case, the degree of acceptance was significantly low at late stages of storage, and most milk samples became relatively unacceptable after the 18th and 20th weeks for untrained and trained panelists, respectively.
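The "% acidity" figures above are conventionally obtained by titration and expressed as % lactic acid. As a hedged sketch of that standard dairy calculation (not this study's laboratory procedure), the snippet uses the lactic acid equivalent weight of 0.090 g/meq; the titration numbers in the example are illustrative.

```python
# Hedged sketch of titratable acidity expressed as % lactic acid (w/w).
# Equivalent weight of lactic acid: 90 g/mol = 0.090 g per meq.
# Example titration values are illustrative, not the study's data.

def percent_lactic_acid(naoh_ml, naoh_normality, sample_g):
    """Titratable acidity as % lactic acid (w/w)."""
    milliequivalents = naoh_ml * naoh_normality
    grams_lactic = milliequivalents * 0.090
    return grams_lactic / sample_g * 100.0

# e.g. 1.8 mL of 0.1 N NaOH to neutralize a 9 g milk sample:
print(percent_lactic_acid(1.8, 0.1, 9.0))  # ~0.18 % lactic acid
```

Values rising from roughly 0.15% toward the 0.17-0.185% range reported at week 24 reflect the acid produced as milk constituents slowly degrade during storage.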

Keywords: UHT milk, shelf life, quality, gravity, bacteria

Procedia PDF Downloads 340
1283 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization

Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade

Abstract:

The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems using concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving a project's viability. In the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. For accurate results, efficiency analysis should therefore be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than commercially available concentrators of this type. The test plant is a closed piping system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that dissipate the acquired energy to the surroundings.
The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator enters the HTF flow rate, the desired concentrator inlet temperature, and the test time into the control system. This paper presents the methodology employed for design and operation, as well as the instrumentation needed for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
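The flow meter and the two temperature transmitters described above enable exactly this reduction: absorbed thermal power Q = ṁ·cp·(T_out - T_in), divided by the direct-normal irradiance on the collector aperture. The sketch below shows the calculation with illustrative numbers; the flow rate, fluid properties, and aperture area are assumptions, not measurements from the Furnas plant.

```python
# Sketch of the collector efficiency calculation enabled by the plant's
# instrumentation. All numbers are illustrative, not plant data.

def absorbed_power_w(m_dot_kg_s, cp_j_kg_k, t_in_c, t_out_c):
    """Q = m_dot * cp * (T_out - T_in), in watts."""
    return m_dot_kg_s * cp_j_kg_k * (t_out_c - t_in_c)

def collector_efficiency(m_dot_kg_s, cp_j_kg_k, t_in_c, t_out_c,
                         dni_w_m2, aperture_m2):
    """Thermal efficiency = absorbed power / incident beam power."""
    q = absorbed_power_w(m_dot_kg_s, cp_j_kg_k, t_in_c, t_out_c)
    return q / (dni_w_m2 * aperture_m2)

eta = collector_efficiency(m_dot_kg_s=0.5, cp_j_kg_k=2000.0,
                           t_in_c=190.0, t_out_c=200.0,
                           dni_w_m2=800.0, aperture_m2=25.0)
print(eta)  # 10 kW absorbed / 20 kW incident = 0.5
```

Logging this ratio over a test window at constant inlet temperature is what allows the prototype's efficiency to be compared against commercial concentrators.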

Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy

Procedia PDF Downloads 120
1282 Building Resilient Communities: The Traumatic Effect of Wildfire on Mati, Greece

Authors: K. Vallianou, T. Alexopoulos, V. Plaka, M. K. Seleventi, V. Skanavis, C. Skanavis

Abstract:

The present research addresses the role of place attachment and emotions in community resiliency and recovery within the context of a disaster. Natural disasters represent a disruption in the normal functioning of a community, leading to a general feeling of disorientation. This study draws on the trauma caused by a natural hazard, a forest fire, assessing changes in the sense of togetherness and determining how the inhabitants' place attachment was affected during the community's reorientation process. The case study area is Mati, a small coastal town in eastern Attica, Greece, where fire broke out on July 23rd, 2018. A quantitative study was conducted through questionnaires via phone interviews one year after the disaster, to address community resiliency in the long run. The sample was composed of 159 participants from the community of Mati plus 120 from Skyros Island, who served as a control group. Inhabitants were prompted to answer items gauging their emotions related to the event, their group identification, the emotional significance of their community, and their place attachment before and a year after the fire. Importantly, community recovery and reorientation were examined within the context of a relative absence of government backing and official support. Emotions related to the event were aggregated into 4 clusters: activation/vigilance, distress/disorientation, indignation, and helplessness. The findings revealed a decrease in the level of place attachment in the impacted area of Mati as compared to the control group on Skyros Island. Importantly, the initial distress caused by the fire prompted the residents to identify more with their community and to report more positive feelings toward it.
Moreover, a mediation analysis indicated that the positive effect of community cohesion on place attachment one year after the disaster was mediated by positive feelings toward the community. Finally, place attachment contributes to enhanced optimism and a more positive perspective concerning Mati's future prospects. Despite insufficient state support for the affected area, the findings suggest an important role for emotions and place attachment during the process of recovery. Implications concerning the role of emotions and social dynamics in shaping place attachment during the disaster recovery process, as well as community resiliency, are discussed.
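For readers unfamiliar with mediation, the core quantity is the indirect effect: the product of the cohesion-to-feelings slope (a path) and the feelings-to-attachment slope (b path). The sketch below is a heavily simplified toy, not the study's analysis: a proper mediation model regresses the outcome on both mediator and predictor for the b path and typically bootstraps the indirect effect, and the deterministic data here are invented purely to show the mechanics.

```python
# Toy sketch of the indirect (mediated) effect a*b. Real analyses use
# multiple regression for the b path and bootstrapped confidence
# intervals; the data below are invented, deterministic placeholders.

def slope(x, y):
    """Simple least-squares regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

cohesion = [1.0, 2.0, 3.0, 4.0]           # X: community cohesion (toy)
feelings = [2.0 * x for x in cohesion]    # M: positive feelings (toy)
attachment = [3.0 * m for m in feelings]  # Y: place attachment (toy)

a = slope(cohesion, feelings)    # X -> M path
b = slope(feelings, attachment)  # M -> Y path
print(a * b)                     # indirect effect = 2 * 3 = 6
```

A substantial indirect effect alongside a reduced direct effect of cohesion on attachment is the pattern the study's mediation result describes.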

Keywords: community resilience, natural disasters, place attachment, wildfire

Procedia PDF Downloads 105
1281 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of every force except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized as under-determined (fewer sensors than impact locations), even-determined (as many sensors as impact locations), or over-determined (more sensors than impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal-window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 537
1280 The Principal-Agent Model with Moral Hazard in the Brazilian Innovation System: The Case of 'Lei do Bem'

Authors: Felippe Clemente, Evaldo Henrique da Silva

Abstract:

The need to adopt some type of industrial and innovation policy in Brazil is a recurring theme in discussions of public interventions aimed at boosting economic growth. For many years, the country has adopted various policies to change its productive structure in order to increase the participation of sectors with the greatest potential to generate innovation and economic growth. Only in the 2000s did Brazil begin to adopt tax incentives as a policy to support industrial and technological innovation, a phenomenon associated with rates of productivity growth and economic development. In this context, in late 2004 and 2005, Brazil reformulated its institutional apparatus for innovation in order to move closer to the OECD conventions and the Frascati Manual. The Innovation Law (2004) and the 'Lei do Bem' (2005) reduced some institutional barriers to innovation, provided incentives for university-business cooperation, and modified access to tax incentives for innovation. Chapter III of the 'Lei do Bem' (no. 11,196/05) is currently the most comprehensive fiscal incentive to stimulate innovation. It complies with the constitutional requirement that the Union encourage innovation in companies and industry by granting tax incentives. With its introduction, the bureaucratic procedure was simplified by not requiring pre-approval of projects or participation in bidding documents. However, preliminary analysis suggests that this instrument has not yet been able to stimulate the sectoral diversification of these investments in Brazil, since its benefits are mostly captured by sectors that already carried out this activity, thus revealing a moral hazard problem. It is necessary, then, to analyze the 'Lei do Bem' to determine whether changes are indeed needed, and to investigate which changes should be implemented in Brazilian innovation policy. 
This work is therefore a first effort to analyze a current national problem, evaluating the effectiveness of the 'Lei do Bem' and suggesting public policies that help direct the State toward legislation capable of encouraging agents to follow what it prescribes. As a preliminary result, it is known that 130 firms used fiscal incentives for innovation in 2006, 320 in 2007 and 552 in 2008. Although this number is rising, it is still small considering that around six thousand firms perform Research and Development (R&D) activities in Brazil. Moreover, another concern with the 'Lei do Bem' is the distribution of the tax incentives granted to companies: there is a significant sectoral correlation between the R&D expenditures of large companies and the R&D expenses of companies that accessed the 'Lei do Bem', reaching 95.8% in 2008. Given these results, it becomes relevant to investigate the law's ability to stimulate private investment in R&D.

Keywords: brazilian innovation system, moral hazard, R&D, Lei do Bem

Procedia PDF Downloads 338
1279 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper applies deep learning methodology to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study explicitly examined how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), implemented using the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which temperature ranges between 150°C and 220°C, pressure between 5 and 15 bar, and material flow rate between 10 and 50 kg/h, critical parameters with a large effect on yield. A dataset of one million production cycles collected over five consecutive years was considered, with detailed logs of the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed the spatial correlations between parameters. The models were trained in a supervised manner with a mean squared error (MSE) loss optimized through the Adam optimizer. After 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Results indicated a 12% increase in production yield compared with the traditional RSM and DOE methods. Moreover, the error margin was reduced by 8%, giving consistently high-quality products from the deep learning models. The monetary value amounted to roughly $2.5 million annually, saved on material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud setup: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. 
Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. In summary, deep learning models, especially those employing LSTM and CNN, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across manufacturing sectors.
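The parameter-tuning idea can be illustrated with a brute-force search over the parameter ranges quoted in the abstract. The yield surface below is a made-up stand-in with a known optimum, not the authors' trained LSTM/CNN model; in the real pipeline the learned network would play the role of `yield_pct`:

```python
import numpy as np

# Grids over the ranges from the abstract: temperature 150-220 C,
# pressure 5-15 bar, material flow rate 10-50 kg/h.
temps = np.linspace(150, 220, 29)
pressures = np.linspace(5, 15, 21)
flows = np.linspace(10, 50, 41)

def yield_pct(T, p, q):
    """Synthetic yield surface peaking at T=185, p=10, q=30 (an assumption)."""
    return 95 - 0.002 * (T - 185) ** 2 - 0.3 * (p - 10) ** 2 - 0.01 * (q - 30) ** 2

# Evaluate the surrogate on the full grid and pick the best configuration.
T, P, Q = np.meshgrid(temps, pressures, flows, indexing="ij")
Y = yield_pct(T, P, Q)
i, j, k = np.unravel_index(np.argmax(Y), Y.shape)
best = (temps[i], pressures[j], flows[k])
print(best)
```

With a trained model in place of the synthetic function, the same exhaustive (or gradient-based) search over admissible settings is what produces the recommended parameter configurations.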

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 35
1278 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa

Authors: Dimakatso Machetele, Kowiyou Yessoufou

Abstract:

Whilst several studies have indicated that road accident events might be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) model to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a quadratic trend given by y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), with y the death rate and x the year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003) and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the numbers of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001) and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes. 
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to remain roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties was also predicted to remain roughly constant at 93,531, although it may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecast to reach 11,241 within the next 10 years, with the worst-case estimate at 19,034 over the same period. Finally, the number of fatalities is also predicted to remain roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trends in road accidents, casualties, fatal crashes, and deaths in South Africa.
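The fitted fatality-rate curve quoted in the abstract can be checked directly. Note the abstract does not state the base year for x, so the peak location is only meaningful as an index along the fitted series:

```python
# The abstract's fitted curve: y = -0.0114 x^2 + 1.2378 x - 2.2627,
# with y the death rate and x the year (index along the series).
a, b, c = -0.0114, 1.2378, -2.2627

x_peak = -b / (2 * a)                    # vertex of the downward parabola
y_peak = a * x_peak**2 + b * x_peak + c  # death rate at the peak
print(round(x_peak, 1), round(y_peak, 1))
```

The vertex at roughly x ≈ 54 with a rate near 31 deaths per 100,000 is consistent with the reported average of 23.14, since the average over the whole series sits below the peak of a concave trend.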

Keywords: road accidents, South Africa, statistical modelling, trends

Procedia PDF Downloads 162
1277 Disconnect between Water, Sanitation and Hygiene Related Behaviours of Children in School and Family

Authors: Rehan Mohammad

Abstract:

Background: Improved Water, Sanitation and Hygiene (WASH) practices in schools ensure children's health, well-being and cognitive performance. In India, under various school WASH interventions, teachers and other staff make every possible effort to educate children about personal hygiene, sanitation practices and the harms of open defecation. However, once children get back to their families, they see others practicing inappropriate WASH behaviors, and they consequently start following them. This shows a disconnect between school behavior and family behavior, which needs to be bridged to achieve the desired WASH outcomes. Aims and Objectives: The aim of this study is to assess the factors causing the disconnect of WASH-related behaviors between children's schools and families, and to suggest behavior change interventions to bridge the gap. Methodology: The present study adopts a mixed-method approach, using both quantitative and qualitative methods of data collection with purposive sampling. Data were collected from 20% of children in each of the age groups 04-08 years and 09-12 years across three primary schools, and from 20% of the households to which they belong, spread over three slum communities in the South district of Delhi. Results: The present study shows that despite several behavior change interventions at the school level, children still practice inappropriate WASH behaviors owing to the disconnect between school and family behaviors. These behaviors vary from one age group to another. The inappropriate WASH behaviors practiced by children include open defecation, improper disposal of garbage, poor personal hygiene, not washing hands at critical junctures, and not washing fruits and vegetables before eating. 
The present study highlights that 80% of children in the age group 04-08 years still practice inappropriate WASH behaviors when they go back to their families after school, whereas this percentage drops to 40% among children in the age group 09-12 years. The study uncovers a dissociation between school and family teaching that creates a wide gap in WASH-related behavioral practices, establishing that children learn and unlearn WASH behaviors because of the evident disconnect between behavior change interventions at the school and household levels. It is also clear that children understand the significance of appropriate WASH practices, but owing to this disconnect the behaviors remain unsettled. The study proposes several behavior change interventions to synchronize children's behaviors at the school and family levels so as to ensure children's health, well-being and cognitive performance.

Keywords: behavioral interventions, child health, family behavior, school behavior, WASH

Procedia PDF Downloads 113
1276 Other Cancers in Patients With Head and Neck Cancer

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether HNC patients face an increased risk of smoking- and alcohol-related cancers as well as other cancers, and whether the rates of non-HNC primaries differ between Aboriginal and non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single center in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, 86 patients (26.9%) had a metachronous or synchronous non-HNC primary. Non-HNC primaries were actually significantly more common in the non-Aboriginal population than in the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a rate similar to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal than Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p=NS). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%) and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age Standardised Incidence Rates (ASIR) in the general community. 
Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), far in excess of ASIRs, which estimate a lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, in line with ASIRs, which estimate a lifetime risk of 5% (by age 85). The rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma against an Australian average lifetime risk of 0.6% in the general population; given the small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared to an ASIR of 17 cases per 100,000 persons, corresponding to an estimated lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers.
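The headline cohort comparison (non-HNC primaries in 30% of 240 non-Aboriginal vs. 17.5% of 80 Aboriginal patients, reported p=0.02) can be checked approximately with a two-proportion z-test. The abstract does not state which test the authors used, so the normal-approximation p-value below is only a back-of-envelope reproduction:

```python
import math

# Counts implied by the abstract: 30% of 240 and 17.5% of 80 (sum = 86 = 26.9% of 320).
x1, n1 = 72, 240   # non-Aboriginal patients with a non-HNC primary
x2, n2 = 14, 80    # Aboriginal patients with a non-HNC primary

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                          # pooled proportion under H0
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_val = math.erfc(abs(z) / math.sqrt(2))                # two-sided p-value
print(round(z, 2), round(p_val, 3))
```

The approximation lands close to the reported p=0.02, and the implied counts are internally consistent with the 86 total non-HNC primaries.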

Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal

Procedia PDF Downloads 78
1275 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset, focusing on the determination of predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out with two different algorithms using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy at Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 deaths (27.81%) and 231 censored individuals (72.19%) were registered. The average baseline cluster of differentiation 4 (CD4) cell count of the HIV/AIDS patients was 126.01, but after three years of antiretroviral therapy follow-up the average CD4 cell count was 305.74, which is quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than in the classical one. 
Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when a subjective analysis was performed incorporating expert opinion and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special attention to the potential risk factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
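The Kaplan-Meier estimator used above for the sex-group comparison can be sketched compactly. The survival times and censoring indicators below are made-up illustration data, not the Alamata cohort:

```python
import numpy as np

# Product-limit (Kaplan-Meier) estimator with right censoring.
times  = np.array([6, 12, 21, 27, 32, 39, 43, 89])   # months to event or censoring
events = np.array([1,  0,  1,  1,  0,  1,  0,  1])   # 1 = death, 0 = censored

def kaplan_meier(times, events):
    """Return (distinct event times, survival probabilities S(t))."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    s, out_t, out_s = 1.0, [], []
    for u in np.unique(times[events == 1]):
        at_risk = np.sum(times >= u)                 # risk set just before u
        deaths = np.sum((times == u) & (events == 1))
        s *= 1 - deaths / at_risk                    # product-limit update
        out_t.append(u)
        out_s.append(s)
    return np.array(out_t), np.array(out_s)

t, S = kaplan_meier(times, events)
print(dict(zip(t.tolist(), np.round(S, 3).tolist())))
```

Fitting this separately per sex group and comparing the resulting step functions is exactly the comparison summarized in the abstract; censored subjects leave the risk set without forcing the curve down.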

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 198
1274 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. An accurate temporal estimate of the crime rate would be valuable for achieving this goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, in which the event time is known only to lie within an interval rather than being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with statistical approaches, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression with special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics, the goal being to identify strategic approaches that are effective in crime prevention and reduction. 
In our case, we allow agencies to deploy their resources over a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within a city where data are available, the proposed approach provides not only an accurate estimate of the intensities for the time units considered but also a time-varying crime incidence pattern. Both are helpful in allocating limited resources, whether by improving an existing patrol plan using the discovered day-of-week clusters or by guiding the deployment of any extra resources available.

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 67