Search results for: radial basis function networks
1081 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. It is therefore strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. In addition, the recommendations contain additional tests, which were performed with the CIRS phantom. Legislation on ionizing radiation protection also requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, an institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and with longitudinal transfer of the loaded table, the vertical deviation must not exceed 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system and, finally, the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
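The tolerance checks described above can be expressed as a short script. A minimal sketch follows; the baseline and measured values are hypothetical placeholders, not measurements from the study:

```python
# Hypothetical QC check against commissioning baselines, following the
# tolerances stated above: CT number within +/-5 HU of commissioning,
# uniformity within +/-10 HU, image noise within 20% of the baseline value.
def qc_report(baseline, measured):
    """Return a dict mapping each test name to True (pass) or False (fail)."""
    results = {}
    results["ct_number"] = abs(measured["ct_number_hu"] - baseline["ct_number_hu"]) <= 5.0
    results["uniformity"] = abs(measured["uniformity_hu"] - baseline["uniformity_hu"]) <= 10.0
    noise_dev = abs(measured["noise"] - baseline["noise"]) / baseline["noise"]
    results["noise"] = noise_dev <= 0.20
    return results

baseline = {"ct_number_hu": 0.0, "uniformity_hu": 2.0, "noise": 5.0}   # commissioning values (hypothetical)
measured = {"ct_number_hu": 3.5, "uniformity_hu": 14.0, "noise": 5.6}  # daily measurement (hypothetical)
report = qc_report(baseline, measured)
```

In this made-up example the uniformity reading drifts by 12 HU and fails, while the CT number and noise tests pass.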
Procedia PDF Downloads 534
1080 Stress, Anxiety and Its Associated Factors Within the Transgender Population of Delhi: A Cross-Sectional Study
Authors: Annie Singh, Ishaan Singh
Abstract:
Background: Transgender people are those whose gender identity differs from the sex assigned to them at birth; their gender behaviour does not match their body anatomy. The community faces discrimination due to their gender identity all across the world. The term transgender is an umbrella term for many people non-conformal to their biological identity; note that it is distinct from gender dysphoria, a DSM-5 disorder defined as the distress faced by an individual due to their non-conforming gender identity. Transgender people have been a part of Indian culture for ages, yet have continued to face exclusion and discrimination in society, which has led to the low socio-economic status of the community. Various studies done across the world have established the role of discrimination, harassment and exclusion in the development of psychological disorders. This study aims to assess the frequency of stress and anxiety in the transgender population and to understand the various factors affecting them. Methodology: A cross-sectional survey of self-consenting transgender individuals above the age of 18 residing in Delhi was done to assess their socio-economic status and experiential ecology. Recruitment of participants was done with the help of NGOs. The survey was constructed around GAD-7 and PSS-10, two well-known scales used to assess stress and anxiety levels. Medians, means and ranges are used for reporting continuous data wherever required, while frequencies and percentages are used for categorical data. For associations and comparisons between groups in categorical data, the chi-square test was used, while the Kruskal-Wallis H test was employed for associations involving multiple ordinal groups. SPSS v28.0 was used to perform the statistical analysis for this study. Results: The survey showed that the frequency of stress and anxiety is high in the transgender population. The demographic survey indicates a low socio-economic background.
44% of participants reported facing discrimination on a daily basis; the frequency of discrimination is higher in transwomen than in transmen. Stress and anxiety levels are similar among both transmen and transwomen. Only 34.5% of participants said they had receptive family or friends. The majority of participants (72.7%) reported a positive or neutral experience with healthcare workers. The prevalence of discrimination is significantly lower in the more highly educated groups. Analysis of the data shows a positive impact of acceptance and reception on mental health, while discrimination is correlated with higher levels of stress and anxiety. Conclusion: The widespread transphobia and discrimination faced by the transgender community have culminated in high levels of stress and anxiety in the transgender population, varying according to multiple socio-demographic factors. Educating people about the LGBT community, along with the formation of support groups, policies and laws, is required to establish trust and promote integration.
Keywords: transgender, gender, stress, anxiety, mental health, discrimination, exclusion
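The chi-square test of association used in the methodology above can be computed by hand from a contingency table. A minimal sketch follows; the counts are illustrative only, not the study's data:

```python
import numpy as np

# Chi-square test of independence on a 2x2 contingency table, as used in
# the methodology above for categorical associations. The counts here are
# illustrative only, not the study's data.
observed = np.array([[40, 60],    # e.g. group A: outcome yes / no (hypothetical)
                     [20, 80]])   # e.g. group B: outcome yes / no (hypothetical)

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

expected = row_totals @ col_totals / grand_total          # counts expected under independence
chi2_stat = ((observed - expected) ** 2 / expected).sum() # Pearson chi-square statistic
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)   # degrees of freedom
```

Comparing `chi2_stat` against the chi-square distribution with `dof` degrees of freedom gives the p-value a package such as SPSS reports.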
Procedia PDF Downloads 111
1079 Randomized Trial of Tian Jiu Therapy in San Fu Days for Patients with Chronic Asthma
Authors: Libing Zhu, Waichung Chen, Kwaicing Lo, Lei Li
Abstract:
Background: Tian Jiu Therapy (a medicinal vesiculation therapy according to traditional Chinese medicine theory) in the San Fu Days (the three hottest days of the year, calculated according to the ancient Chinese calendar) is widely used by patients with chronic asthma in China, although from a modern medicine perspective there is insufficient evidence on its effectiveness and safety. We investigated the efficacy and safety of Tian Jiu Therapy compared with placebo in patients with chronic asthma. Methods: Patients with chronic asthma were randomly assigned to a Tian Jiu treatment group (n=165) or a placebo control group (n=158). Registered Chinese Medicine practitioners at the Orthopedics-Traumatology, Acupuncture, and Tui-na Clinical Centre for Teaching and Research, School of Chinese Medicine, The University of Hong Kong, administered Tian Jiu Therapy and placebo treatment 3 times over 2 months. Patients completed questionnaires and a lung function test before treatment and at 3, 6, 9, and 11 months after treatment. The primary outcome was the number of asthma-related sub-healthy symptoms and the percentage of patients with each of twenty-three symptoms. Results: A total of 451 patients were recruited; 111 patients refused or did not attend at the appointed time, and 17 did not meet the inclusion criteria. Consequently, 323 eligible patients were enrolled. There was no difference between the Tian Jiu Therapy group and the placebo control group at the end of all treatments in either primary or secondary outcomes. However, Tian Jiu Therapy, as compared with placebo, significantly reduced the percentage of participants who were woken by asthma symptoms, from 27% to 14% at the 2nd follow-up (P < 0.05). Similarly, Tian Jiu Therapy significantly reduced the proportion of participants who had the symptoms of runny nose and sneezing before onset, from 18% to 8% at the 2nd follow-up (P < 0.05).
Additionally, Tian Jiu Therapy significantly reduced the severity of asthma: the proportion of participants who did not need treatment during an asthma attack increased from 6% to 15% at the 1st follow-up and from 0% to 7% at the 3rd follow-up (P < 0.05). Further improvements occurred in the Tian Jiu Therapy group: it reduced the proportion of participants who sweated spontaneously at the 3rd follow-up and who had diarrhea after intake of oily food at the 4th follow-up (P < 0.05). Conclusion: When added to a regimen of foundational therapy for chronic asthma participants, Tian Jiu Therapy further reduced the need for medications to control asthma, improved participants' quality of life, and significantly reduced the severity of asthma. Moreover, this benefit appears to have a cumulative effect over time, in accordance with the TCM theory that 'winter disease is cured in summer'.
Keywords: asthma, Tian Jiu Therapy, San Fu Days, traditional Chinese medicine, clinical trial
Procedia PDF Downloads 314
1078 Conceptual Design of Gravity Anchor Focusing on Anchor Towing and Lowering
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg
Abstract:
Wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far away from shore, the construction of offshore wind turbines began in recent decades. However, the installation of offshore foundation-based (monopile) wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines, adopted from the oil and gas industry, is being expanded. In this research work, a universal heavyweight gravity anchor (UGA) for the floating foundation of floating Tension Leg Platform (TLP) sub-structures is developed. The work is funded by the German Federal Ministry of Education and Research for a three-year (2019-2022) research program called “Offshore Wind Solutions Plus (OWSplus) - Floating Offshore Wind Solutions Mecklenburg-Vorpommern,” a group consisting of German institutions (universities, laboratories, and consulting companies). This part of the project focuses on the numerical modeling of the gravity anchor, which involves analyzing and solving fluid flow problems. Compared to gravity-based torpedo anchors, these UGAs will be towed and lowered via controlled machines (tug boats) at lower speeds. This kind of UGA installation is new to the offshore wind industry, particularly for TLPs, and very few research works have been carried out on it in recent years. Conventional methods for transporting an anchor require a large transportation crane vessel, which entails a greater cost. The conceptual UGA consists of ballasting chambers which exploit buoyancy forces: the inside chambers are filled with just enough water that the anchor can float on the water for towing. After reaching the installation site, the chambers are ballasted with water for lowering.
After its lifetime, the UGA can be unballasted (for retrieval or replacement), resulting in its self-rising to the sea surface; the buoyancy chambers thus offer the advantage of using a UGA without the need for heavy machinery. However, while being lowered towards or raised away from the seabed, the UGA experiences a harsh marine environment due to the interaction of waves and currents. This leads to drifting of the anchor from the desired installation position and damage to the lowering machines. To overcome such problems, a numerical model is built to investigate the influence of different outer contours and other flow-governing shapes that can be installed on the UGA to counter the turbulence and drifting. The presentation will highlight the importance of the Computational Fluid Dynamics (CFD) numerical model in OpenFOAM, an open-source software.
Keywords: anchor lowering, towing, waves, currents, computational fluid dynamics
Procedia PDF Downloads 166
1077 Outcomes-Based Qualification Design and Vocational Subject Literacies: How Compositional Fallacy Short-Changes School-Leavers’ Literacy Development
Authors: Rose Veitch
Abstract:
Learning outcomes-based qualifications have been heralded as the means to raise vocational education and training (VET) standards, meet the needs of the changing workforce, and establish equivalence with existing academic qualifications. Characterized by explicit, measurable performance statements and atomistically specified assessment criteria, the outcomes model has been adopted by many VET systems worldwide since its inception in the United Kingdom in the 1980s. Debate to date centers on how the outcomes model treats knowledge. Flaws have been identified in terms of the overemphasis of end-points, neglect of process and a failure to treat curricula coherently. However, much of this censure has evaluated the outcomes model from a theoretical perspective; to date, there has been scant empirical research to support these criticisms. Various issues therefore remain unaddressed. This study investigates how the outcomes model impacts the teaching of subject literacies. This is of particular concern for subjects on the academic-vocational boundary such as Business Studies, since many of these students progress to higher education in the United Kingdom. This study also explores the extent to which the outcomes model is compatible with borderline vocational subjects. To fully understand if this qualification model is fit for purpose in the 16-18 year-old phase, it is necessary to investigate how teachers interpret their qualification specifications in terms of curriculum, pedagogy and assessment. Of particular concern is the nature of the interaction between the outcomes model and teachers’ understandings of their subject-procedural knowledge, and how this affects their capacity to embed literacy into their teaching. This present study is part of a broader doctoral research project which seeks to understand if and how content-area, disciplinary literacy and genre approaches can be adapted to outcomes-based VET qualifications. 
This qualitative research investigates the ‘what’ and ‘how’ of literacy embedding from the perspective of in-service teacher development in the 16-18 phase of education. Using ethnographic approaches, it is based on fieldwork carried out in one Further Education college in the United Kingdom. Emergent findings suggest that the outcomes model is not fit for purpose in the context of borderline vocational subjects. It is argued that the outcomes model produces inferior qualifications due to compositional fallacy: the sum of a subject’s components does not add up to the whole. Findings indicate that procedural knowledge, largely unspecified by some outcomes-based qualifications, is where subject literacies are situated, and that this often gets lost in ‘delivery’. It seems that the outcomes model provokes an atomistic treatment of knowledge amongst teachers, along with the privileging of propositional knowledge over procedural knowledge. In other words, outcomes-based VET is a hostile environment for subject-literacy embedding. It is hoped that this research will produce useful suggestions for how this problem can be ameliorated and will provide an empirical basis for the potential reforms required to address these issues in vocational education.
Keywords: literacy, outcomes-based, qualification design, vocational education
Procedia PDF Downloads 12
1076 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop across and resistance of exploding copper wires of diameters 25, 40, and 100 µm, surrounded by 1 bar nitrogen and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and is coupled with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. First, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be simulated with 1D models. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current is carried by the vaporized wire material before it is dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor.
The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
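The breakdown estimate by integrating Townsend growth coefficients along a field line can be sketched for the simplest case of a uniform field. The coefficient model and constants below are generic textbook-style values for a nitrogen-like gas, not the BOLSIG⁺/LAPLACE data used in the study:

```python
import numpy as np

# Streamer-criterion sketch: breakdown is assumed when the integral of the
# effective Townsend ionization coefficient alpha(E) along a field line
# reaches a critical value K (~18-20). The model alpha/p = A*exp(-B*p/E)
# and its constants are generic illustrative values, not the
# BOLSIG+/LAPLACE mixture data used in the study.
A = 12.0      # 1/(cm*Torr), illustrative
B = 342.0     # V/(cm*Torr), illustrative
p = 760.0     # pressure in Torr (~1 bar)
K = 18.0      # critical avalanche exponent

def avalanche_integral(voltage, gap_cm):
    """Integral of alpha along a uniform-field gap of length gap_cm."""
    E = voltage / gap_cm                  # uniform field, V/cm
    alpha = A * p * np.exp(-B * p / E)    # effective ionization coefficient
    return alpha * gap_cm                 # uniform field: integral = alpha * d

def breakdown_voltage(gap_cm, lo=1e3, hi=1e6):
    """Bisection for the voltage where the avalanche integral reaches K."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if avalanche_integral(mid, gap_cm) < K:
            lo = mid
        else:
            hi = mid
    return hi

v_bd = breakdown_voltage(1.0)  # breakdown voltage for a 1 cm gap, volts
```

In the non-uniform fields of the test bench, the product `alpha * d` becomes a line integral of alpha along each electric field line, but the bisection on the criterion is the same.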
Procedia PDF Downloads 88
1075 Transformation in Palliative Care Delivery in Surgery
Authors: W. L. Tsang, H. Y. Li, S. L. Wong, T. Y. Kwok, S. C. Yuen, S. S. Kwok, P. S. Ko, S. Y. Lau
Abstract:
Introduction: Palliative care is undoubtedly necessary in surgery. When one looks at studies of what patients with life-threatening illness want and compares them to what these patients experience in surgical units, the gap is huge. Surgical nurses, being patient advocates, should engage with patients and families sooner rather than later in their illness trajectories to consider how to manage the illness, not just the patients' capacity to survive. Objective: This clinical practice guide aims to fill the service gap in palliative care in surgery by producing a quality-driven, evidence-based yet straightforward clinical practice guide based on a focus strategy. Methodology: In line with the Guide to Good Nursing Practice: End-of-Life Care recommended by the Nursing Council of Hong Kong and the strategic goal of improving the quality of palliative care proposed in the HA Strategic Plan 2017-2022, multiple phases of work were undertaken from July 2015 to December 2017. A pragmatic clinical practice guide for surgical patients facing life-threatening conditions was developed based on assessments of surgical nurses' knowledge of and attitudes towards end-of-life care. Key domains, including preparation for bereavement, nursing care for imminently dying patients, and care at the dying scene, were crystallized according to the results of the assessments and the palliative care checklist formulated by the UCH Palliative Care Team. After a year of rollout, the guide's content was refined through analyses of its implementation in routine practice and consensus opinions from frontline nurses. Results and Outcomes: This clinical practice guide inspires surgical nurses with the art of care to provide for patients’ comfort, function, and longevity. It provides practical directions and assists nurses in mastering the skills of advance care planning, and in learning how to be clear with patients, families and themselves about the realities of the disease pictures.
Through its implementation, patients and families are included in the decision process, and their wishes are honored. The delivery of explicit, high-quality palliative care maintains good nurse-to-patient relations and enhances patients' and families' satisfaction with hospital care. Conclusion: Surgical nursing has always been up to the unique challenges of the era. This clinical practice guide has become an island of credibility for our nurses as they traverse the often stormy waters of life-limiting illness.
Keywords: palliative care delivery, palliative care in surgery, hospice care, end-of-life care
Procedia PDF Downloads 257
1074 HIV-1 Nef Mediates Host Invasion by Differential Expression of Alpha-Enolase
Authors: Reshu Saxena, R. K. Tripathi
Abstract:
HIV-1 transmission and spread involve significant host-virus interaction. Potential targets for the prevention of HIV-1 lie at the mucosal barriers. A better understanding of how HIV-1 infects target cells at such sites and leads to their invasion is therefore required, with a prime focus on the host determinants regulating HIV-1 spread. HIV-1 Nef is important for viral infectivity and pathogenicity. It promotes HIV-1 replication, facilitating immune evasion by interacting with various host factors and altering cellular pathways via multiple protein-protein interactions. In this study, nef was sequenced from HIV-1 patients and showed specific mutations revealing sequence variability. To explore the difference in Nef functionality based on this sequence variability, we studied the effects of HIV-1 Nef in the human SupT1 T cell line and the THP-1 monocyte-macrophage cell line through a proteomics approach. 2D gel electrophoresis in control and Nef-transfected SupT1 cells demonstrated several differentially expressed proteins, with significant modulation of alpha-enolase. In further studies, the effects of Nef on alpha-enolase regulation were found to be cell-lineage-specific: stimulatory in macrophages/monocytes, inhibitory in T cells, and without effect in HEK-293 cells. Cell migration and invasion studies were employed to determine the biological function affected by Nef-mediated regulation of alpha-enolase. Cell invasion was enhanced in THP-1 cells but inhibited in SupT1 cells by wild-type nef. In addition, the modulation of enolase and cell invasion remained unaffected by a unique nef variant. These results indicated that the regulation of alpha-enolase expression and of the invasive property of host cells by Nef is sequence-specific, suggesting the involvement of a particular motif of Nef. To precisely determine this site, we designed a heptapeptide including the suggested alpha-enolase-regulating sequence of nef, as well as a nef mutant with this site deleted.
Macrophages/monocytes, being the major cells affected by HIV-1 at mucosal barriers, were investigated in particular with the nef mutant and the peptide. Both the nef mutant and the heptapeptide led to inhibition of the enhanced enolase expression and increased invasiveness in THP-1 cells. Together, these findings suggest a possible mechanism of host invasion by HIV-1 through Nef-mediated regulation of alpha-enolase and identify a potential therapeutic target for HIV-1 entry at mucosal barriers.
Keywords: HIV-1 Nef, nef variants, host-virus interaction, tissue invasion
Procedia PDF Downloads 411
1073 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Traffic accident studies usually contemplate the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors of greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks run mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land uses, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks. 39 traffic and speed sensors are located on its main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain what factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is obtained by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors.
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1 through C6: car; microbus and van; bus; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles). In addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone.
Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
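A Poisson regression with a log link, of the kind described above, can be fitted by iteratively reweighted least squares (equivalent to maximum likelihood). A minimal sketch on synthetic data follows; the counts and covariates are made up, not the study's 17,520 observations:

```python
import numpy as np

# Poisson regression with a log link, fitted by Newton/IRLS iterations
# (equivalent to maximum likelihood). The data below are synthetic,
# not the study's traffic observations.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n),            # intercept
                     rng.uniform(0, 1, n),  # e.g. share of trucks (hypothetical)
                     rng.uniform(0, 1, n)]) # e.g. normalized average speed (hypothetical)
beta_true = np.array([-1.0, 1.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))      # accident counts per period

beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)                   # mean under the log link
    W = mu                                  # Poisson variance equals the mean
    # Newton step: beta += (X' W X)^{-1} X' (y - mu)
    step = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:        # converged
        break
```

The fitted `beta` recovers `beta_true` up to sampling error; the sign and magnitude of each coefficient play the role of the per-class relations reported above.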
Procedia PDF Downloads 130
1072 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability
Authors: Naziema Begum Jappie
Abstract:
Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. 
Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.
Keywords: entrepreneurship, higher education, innovation, leadership
Procedia PDF Downloads 68
1071 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development
Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan
Abstract:
Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, due to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model. SSHP plants were designed for the selected sites, and the designs were priced using pricing models (covering civil, mechanical and electrical aspects). Following feasibility studies on the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar potential SSHP plant projects. The methodology followed in conducting the feasibility analysis for other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, a Capital Cost Comparison Ratio (CCCR) and levelised cost formulae for SSHP projects for the different types of plant installations. It included setting up a model for the development of a design chart for an SSHP plant, calculating the NPV, CCCR and levelised cost for the different scenarios within the model by varying the parameters of the developed formulae, setting up the design chart for the different scenarios within the model, and analyzing and interpreting the results.
From the interpretation of the developed design charts for feasible SSHP, it can be seen that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP. High-head, short-transmission-line and islanded mini-grid SSHP installations are the most feasible, and the levelised cost of SSHP is high for low power generation sites. The main conclusion from the study is that the levelised cost of SSHP projects indicates that the cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of SSHP for rural electrification and the cost of infrastructure to connect remote rural communities to the local or national electricity grid provide a low CCCR and render SSHP for rural electrification feasible on this basis.
Keywords: cost, feasibility, rural electrification, small-scale hydropower
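The NPV and levelised-cost calculations varied across scenarios can be sketched as follows. This is a minimal illustration with assumed discount rate, lifetime, and cash-flow values, not the study's actual pricing models or formulae:

```python
# Hypothetical sketch of the NPV and levelised-cost formulae an SSHP
# feasibility model would vary per scenario; all inputs are assumed values.

def npv(capital_cost, annual_saving, rate, years):
    """Net present value: -capex plus discounted annual savings."""
    return -capital_cost + sum(annual_saving / (1 + rate) ** t
                               for t in range(1, years + 1))

def levelised_cost(capital_cost, annual_om, annual_energy_kwh, rate, years):
    """Levelised cost of energy: discounted lifetime costs / discounted energy."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capital_cost + sum(annual_om * d for d in disc)
    energy = sum(annual_energy_kwh * d for d in disc)
    return costs / energy

# Example scenario: R1 000 000 capex, R120 000/yr saving, 8% discount, 20 years.
project_npv = npv(1_000_000, 120_000, 0.08, 20)
lcoe = levelised_cost(1_000_000, 20_000, 80_000, 0.08, 20)
```

Sweeping such parameters (head, line length, demand) over many scenarios is what would populate a design chart of the kind described.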
Procedia PDF Downloads 224
1070 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. 
The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
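The CT-processing pipeline described above (noise reduction, thresholding, particle separation) can be sketched in miniature. The synthetic 2-D "slice" and the threshold are assumptions, and connected-component labeling stands in here for the full watershed segmentation step:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy stand-in for a CT slice: two bright "particles" on a dark background.
slice_ = np.zeros((20, 20))
slice_[2:6, 2:6] = 1.0      # particle 1
slice_[10:15, 12:18] = 1.0  # particle 2

denoised = ndi.median_filter(slice_, size=3)   # noise reduction
mask = denoised > 0.5                          # thresholding
labels, n_particles = ndi.label(mask)          # particle separation

# Per-particle pixel counts, the raw material for a size-distribution analysis.
sizes = ndi.sum(mask, labels, index=range(1, n_particles + 1))
```

On real data the watershed transform would additionally split touching particles before the statistics are computed.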
Procedia PDF Downloads 31
1069 Morphological and Chemical Characterization of the Surface of Orthopedic Implant Materials
Authors: Bertalan Jillek, Péter Szabó, Judit Kopniczky, István Szabó, Balázs Patczai, Kinga Turzó
Abstract:
Hip and knee prostheses are among the most frequently used medical implants and can significantly improve patients’ quality of life. Long-term success and biointegration of these prostheses depend on several factors, like bulk and surface characteristics, construction and biocompatibility of the material. The applied surgical technique and the general health condition and quality of life of the patient are also determinant factors. Medical devices used in orthopedic surgeries have different surfaces depending on their function inside the human body. Surface roughness of these implants determines the interaction with the surrounding tissues. Numerous modifications have been applied in recent decades to improve specific properties of an implant. Our goal was to compare the surface characteristics of typical implant materials used in orthopedic surgery and traumatology. Morphological and chemical structure of Vortex plate anodized titanium, cemented THR (total hip replacement) stem high nitrogen REX steel (SS), uncemented THR stem and cup titanium (Ti) alloy with titanium plasma spray coating (TPS), cemented cup and uncemented acetabular liner HXL and UHMWPE and TKR (total knee replacement) femoral component CoCrMo alloy (Sanatmetal Ltd, Hungary) discs were examined. Visualization and elemental analysis were made by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). Surface roughness was determined by atomic force microscopy (AFM) and profilometry. SEM and AFM revealed the morphological and roughness features of the examined materials. TPS Ti presented the highest Ra value (25 ± 2 μm), followed by CoCrMo alloy (535 ± 19 nm), Ti (227 ± 15 nm) and stainless steel (170 ± 11 nm). The roughness of the HXL and UHMWPE surfaces was in the same range, 147 ± 13 nm and 144 ± 15 nm, respectively. 
EDS confirmed typical elements on the investigated prosthesis materials: Vortex plate Ti (Ti, O, P); TPS Ti (Ti, O, Al); SS (Fe, Cr, Ni, C); CoCrMo (Co, Cr, Mo); HXL (C, Al, Ni) and UHMWPE (C, Al). The results indicate that the surfaces of the prosthesis materials have significantly different features and that the applied investigation methods are suitable for their characterization. Contact angle measurements and in vitro cell culture testing are further planned to test their surface energy characteristics and biocompatibility.
Keywords: morphology, PE, roughness, titanium
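The Ra values reported above are, by definition, the arithmetic mean absolute deviation of the surface profile from its mean line. A minimal sketch of that definition, using a made-up profile rather than any measured data:

```python
import numpy as np

# Ra (arithmetic mean roughness): mean absolute deviation of the profile
# heights from their mean line. The profile below is an invented example.

def ra(profile_nm):
    z = np.asarray(profile_nm, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

profile = [100.0, 300.0, 100.0, 300.0]  # heights in nm (hypothetical)
roughness = ra(profile)                 # nm
```

An AFM or profilometer trace would supply thousands of height samples in place of this four-point toy profile.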
Procedia PDF Downloads 126
1068 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India
Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari
Abstract:
The northward convergence of the Indian plate has a dominating influence over the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing technique-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in the river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creeping and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on the highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrocks have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of GLAA are corroborated with the historical landslide events and ultimately used for the generation of landslide susceptibility maps of the current study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. 
However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.
Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya
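Gradient-length style indices of the kind GLAA builds on can be sketched in the spirit of Hack's stream-length gradient index, SL = (dH/dL)·L, where anomalously high values along a profile flag candidate knickzones. The elevations and distances below are invented sample data, not the Alaknanda profile:

```python
import numpy as np

# Hedged sketch of a gradient-length index along a longitudinal river profile,
# after Hack's SL index. Inputs are illustrative, not measured DEM data.

def sl_index(distance_m, elevation_m):
    d = np.asarray(distance_m, dtype=float)
    h = np.asarray(elevation_m, dtype=float)
    grad = -np.diff(h) / np.diff(d)       # downstream slope of each segment
    midpoint = (d[:-1] + d[1:]) / 2.0     # distance from source to segment centre
    return grad * midpoint                # high values flag gradient anomalies

dist = [1000.0, 2000.0, 3000.0, 4000.0]  # m from source (hypothetical)
elev = [3000.0, 2900.0, 2850.0, 2600.0]  # m elevation (hypothetical)
sl = sl_index(dist, elev)
```

Mapping where such values spike, and overlaying historical landslide events, is conceptually how an anomaly analysis feeds a susceptibility map.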
Procedia PDF Downloads 72
1067 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology
Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa
Abstract:
Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage. Their potential as a clean and safe pathway for CCS is tremendous since it requires only water and gas to be mixed under favorable temperatures and moderately high pressures. However, the hydrate formation process is highly exothermic; it releases about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production at a specific temperature range necessitates high heat transfer rates in mixing processes. Past technologies often struggled to meet this requirement, resulting in low productivity or extended mixing/contact times; inadequate heat transfer rates have consistently posed a limitation. Consequently, there is a need for more effective continuous hydrate production technologies in industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and high interfacial surface area. The mixer capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes fully chaotic and self-sustained oscillatory flow, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹. 
A laboratory-scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂ in an aqueous slurry of up to 20% in mass. The strong mixing intensity has proven to be sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms, and to achieve CO₂ conversions of 99% at the device outlet. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient is orders of magnitude larger than in similar devices with the same purpose, ranging from 1 000 to 12 000 h⁻¹. The present technology has shown itself to be capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.
Keywords: network, mixing, hydrates, continuous process, carbon dioxide
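Using only the figures quoted above (hydrate formation releases about 2 MJ per kilogram of CO₂, and the pilot captures 1 kg⋅h⁻¹), the cooling duty the mixer must sustain to stay inside the 0 - 10 °C window can be estimated with a back-of-envelope check:

```python
# Back-of-envelope check using the numbers stated in the abstract:
# ~2 MJ of heat released per kg of CO2 converted to hydrate, at 1 kg/h.

heat_per_kg = 2e6            # J per kg of CO2 (exothermic heat of formation)
capture_rate = 1.0 / 3600.0  # kg/s, i.e. 1 kg/h

cooling_duty_w = heat_per_kg * capture_rate  # continuous heat-removal rate, W
```

The result, roughly 0.56 kW of continuous heat removal for a 1 kg⋅h⁻¹ device, illustrates why the mixer's high volumetric heat transfer rates matter.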
Procedia PDF Downloads 52
1066 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials
Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar
Abstract:
Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention of the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure or both are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure of material is suitable to get a gradient of properties and performances. FGMs are synthesized in such a way that they possess continuous spatial variations in volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more than one direction. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and other engineering applications like the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved, additionally including holes, inclusions and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side is considered in the problem. Exponential gradation in property is imparted in the x-direction. 
A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y direction. In some problems, the domain contains a major crack along with minor cracks, holes, and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that minor cracks have the least effect on the domain’s failure crack length, soft inclusions have a moderate effect, and holes have the greatest effect. It is observed that, in each case, crack growth before failure is greater when hard inclusions are present in place of soft inclusions.
Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)
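The Ramberg-Osgood model named above relates total strain to stress as an elastic part plus a power-law plastic part, ε = σ/E + α(σ/E)(σ/σ₀)ⁿ⁻¹. A minimal numeric sketch, with illustrative material constants rather than the paper's FGM parameters:

```python
# Sketch of the Ramberg-Osgood stress-strain relation used for the
# elastic-plastic material model. E, sigma_y, alpha and n are assumed
# illustrative values, not the copper-nickel/alumina FGM properties.

def ramberg_osgood_strain(stress, E, sigma_y, alpha, n):
    elastic = stress / E
    plastic = alpha * (stress / E) * (stress / sigma_y) ** (n - 1)
    return elastic + plastic

# Example: 200 MPa applied stress, E = 200 GPa, yield 250 MPa.
eps = ramberg_osgood_strain(stress=200.0, E=200e3, sigma_y=250.0,
                            alpha=0.002, n=5)
```

In an FGM the constants themselves become functions of position (here, exponential in x), so each integration point evaluates this relation with its own local properties.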
Procedia PDF Downloads 389
1065 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. Although India has the second largest number of diabetic patients after China, to the best of our knowledge not a single risk score for complications has ever been investigated. Diabetic retinopathy is a serious complication and is the topmost reason for visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, or IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of DR. Cox proportional hazard regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed by the Hosmer-Lemeshow test and the area under the receiver operating characteristic curve (ROC). Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. The optimal cut-off point is chosen by Youden’s index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is concluded. The risk scores developed can be applied by doctors and patients themselves for self-evaluation. 
Furthermore, the five-year probabilities can be applied as well to forecast and maintain the condition of patients. This provides immense benefit in real application of DR prediction in T2DM.
Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus
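Choosing the optimal cut-off by Youden's index, as described above, means maximizing J = sensitivity + specificity − 1 over candidate thresholds of the risk score. A small sketch with toy scores and outcome labels, not the study cohort:

```python
import numpy as np

# Hedged sketch of Youden's-index cut-off selection for a risk score.
# Scores and labels below are invented toy data.

def youden_cutoff(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_j, best_cut = -1.0, None
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

cut, j = youden_cutoff([0.1, 0.3, 0.4, 0.8, 0.9], [0, 0, 1, 1, 1])
```

In practice the candidate thresholds come from the fitted Cox risk scores and the labels from observed DR events.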
Procedia PDF Downloads 186
1064 Monitoring of Indoor Air Quality in Museums
Authors: Olympia Nisiforou
Abstract:
The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviors. Even if such exhibits are now located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentration of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physiochemical parameters did not vary significantly within the various sampling locations. Seasonal fluctuations of ammonia were observed, showing higher concentrations in the summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable Gas Chromatography-Mass Spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. 
Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, the ventilation system and outdoor air) of the above organic compounds, the concentrations of some of them within the areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. High levels of particulate matter (PM), fungi and bacteria were found in the museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations (on a bi-daily basis) of several pollutants and presents several innovative features, including prompt alerting in case the average concentrations of monitored pollutants exceed the limit values defined by the user.
Keywords: exhibitions, indoor air quality, VOCs, pollution
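The alerting rule described above (flag a pollutant when its average concentration exceeds a user-defined limit) can be sketched minimally. The pollutant names, readings and limits below are illustrative placeholders, not the museum's data or the system's actual interface:

```python
# Minimal sketch of a user-configurable exceedance alert on averaged
# pollutant readings. All names and numbers are hypothetical.

def check_alerts(readings, limits):
    """readings: {pollutant: [values]}; limits: {pollutant: max allowed average}."""
    alerts = []
    for pollutant, values in readings.items():
        avg = sum(values) / len(values)
        if avg > limits.get(pollutant, float("inf")):
            alerts.append((pollutant, round(avg, 2)))
    return alerts

alerts = check_alerts(
    {"VOC_ppb": [40, 55, 80], "NOx_ppb": [10, 12, 11]},
    {"VOC_ppb": 50.0, "NOx_ppb": 40.0},
)
```

A deployed system would average over the bi-daily monitoring window and push the alert to the curator rather than return a list.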
Procedia PDF Downloads 123
1063 Pediatric Drug Resistance Tuberculosis Pattern, Side Effect Profile and Treatment Outcome: North India Experience
Authors: Sarika Gupta, Harshika Khanna, Ajay K Verma, Surya Kant
Abstract:
Background: Drug-resistant tuberculosis (DR-TB) is a growing health challenge to global TB control efforts. Pediatric DR-TB is one of the neglected infectious diseases. In our previously published report, we noted an increasing prevalence of DR-TB in the pediatric population at a tertiary health care centre in North India, estimated at 17.4%, 15.1%, 18.4%, and 20.3% in the years 2018, 2019, 2020, and 2021, respectively. Limited evidence exists about the pattern of drug resistance, the side effect profile and the programmatic outcomes of pediatric DR-TB treatment. Therefore, this study was done to find out the pattern of resistance, the side effect profile and the treatment outcome. Methodology: This was a prospective cohort study conducted at the nodal drug-resistant tuberculosis centre of a tertiary care hospital in North India from January 2021 to December 2022. Subjects included children aged 0-18 years with a diagnosis of DR-TB on the basis of GeneXpert (rifampicin [RIF] resistance detected), line probe assay and drug sensitivity testing (DST) of M. tuberculosis (MTB) grown on culture of body fluids. Children were classified as having monoresistant TB, polyresistant TB (resistance to more than one first-line anti-TB drug, other than both INH and RIF), MDR-TB, pre-XDR-TB or XDR-TB, as per the WHO classification. All the patients were prescribed DR-TB treatment as per the standard guidelines, either a shorter oral DR-TB regimen or a longer all-oral MDR/XDR-TB regimen (modified for children below five years of age). All the patients were followed up for side effects of treatment once per month. The patient outcomes were categorized as good outcomes if they had completed treatment and were cured or were improving during the course of treatment, while bad outcomes included death or lack of improvement during the course of treatment. Results: Of the 50 pediatric patients included in the study, 34 were females (66.7%) and 16 were males (31.4%). 
Thirty-three patients (64.7%) were suffering from pulmonary TB, while 17 (33.3%) were suffering from extrapulmonary TB. The proportions of monoresistant TB, polyresistant TB, MDR-TB, pre-XDR-TB and XDR-TB were 2.0%, 0%, 50.0%, 30.0% and 18.0%, respectively. A good outcome was reported in 40 patients (80.0%). The 10 bad outcomes were 7 deaths (14%) and 3 (6.0%) children who were not improving. Adverse events (single or multiple) were reported in all the patients, most of which were mild in nature. The most common adverse events were metallic taste in 16 (31.4%), rash and allergic reaction in 15 (29.4%), nausea and vomiting in 13 (26.0%), arthralgia in 11 (21.6%) and alopecia in 11 (21.6%). A serious adverse event of QTc prolongation was reported in 4 cases (7.8%), but neither arrhythmias nor symptomatic cardiac side effects occurred. Vestibular toxicity was reported in 2 (3.9%), and psychotic symptoms in 4 (7.8%). Hepatotoxicity, hypothyroidism, peripheral neuropathy, gynaecomastia, and amenorrhea were reported in 2 (4.0%), 4 (7.8%), 2 (3.9%), 1 (2.0%), and 2 (3.9%), respectively. None of the drugs needed to be withdrawn due to uncontrolled adverse events. Conclusion: Pediatric DR-TB treatment achieved favorable outcomes in a large proportion of children. The DR-TB regimen drugs were overall well tolerated in this cohort.
Keywords: pediatric, drug-resistant, tuberculosis, adverse events, treatment
Procedia PDF Downloads 66
1062 Effects of Renin Angiotensin Pathway Inhibition on Efficacy of Anti-PD-1/PD-L1 Treatment in Metastatic Cancer
Authors: Philip Friedlander, John Rutledge, Jason Suh
Abstract:
Inhibition of programmed death-1 (PD-1) or its ligand PD-L1 confers therapeutic efficacy in a wide range of solid tumor malignancies. Primary or acquired resistance can develop through activation of immunosuppressive immune cells such as tumor-associated macrophages. The renin angiotensin system (RAS) systemically regulates fluid and sodium hemodynamics, but its components are expressed on, and regulate the activity of, immune cells, particularly of myeloid lineage. We hypothesized that inhibition of RAS would improve the efficacy of PD-1/PD-L1 treatment. A retrospective analysis was performed through a chart review of patients with solid metastatic malignancies treated with a PD-1/PD-L1 inhibitor between 1/2013 and 6/2019 at Valley Hospital, a community hospital in New Jersey, USA. Efficacy was determined by medical oncologist documentation of clinical benefit in visit notes and by the duration of time on immunotherapy treatment. The primary endpoint was the determination of efficacy differences in patients treated with an inhibitor of RAS (an angiotensin-converting enzyme inhibitor, ACEi, or an angiotensin receptor blocker, ARB) compared to patients not treated with these inhibitors. To control for broader antihypertensive effects, efficacy as a function of treatment with beta blockers was assessed. 173 patients treated with PD-1/PD-L1 inhibitors were identified, of whom 52 were also treated with an ACEi or ARB. Chi-square testing revealed a statistically significant relationship between being on an ACEi or ARB and efficacy of PD-1/PD-L1 therapy (p=0.001). No statistically significant relationship was seen between patients taking or not taking beta blocker antihypertensives (p=0.33). Kaplan-Meier analysis showed a statistically significant improvement in the duration of therapy favoring patients concomitantly treated with an ACEi or ARB compared to patients not exposed to antihypertensives and to those treated with beta blockers. 
Logistic regression analysis revealed that age, gender, and cancer type did not have significant effects on the odds of experiencing clinical benefit (p=0.74, p=0.75, and p=0.81, respectively). We conclude that retrospective analysis of the treatment of patients with solid metastatic tumors with anti-PD-1/PD-L1 in a community setting demonstrates greater clinical benefit in the context of concomitant ACEi or ARB therapy, irrespective of gender or age. These data support the development of prospective assessment through randomized clinical trials.
Keywords: angiotensin, cancer, immunotherapy, PD-1, efficacy
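The chi-square test of independence used above (RAS inhibitor use versus clinical benefit) can be sketched as follows. The 2×2 cell counts are invented for illustration; the abstract reports only n=173 with 52 on an ACEi/ARB and p=0.001, not the full contingency table:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = ACEi/ARB vs. not, cols = benefit vs. not.
# The split within each row is assumed, not taken from the study.
table = [[40, 12],
         [55, 66]]

chi2, p, dof, expected = chi2_contingency(table)
significant = p < 0.05
```

`chi2_contingency` applies Yates' continuity correction for 2×2 tables by default; the `expected` array gives the counts implied by independence, useful for checking the test's validity.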
Procedia PDF Downloads 76
1061 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of efficient and reliable power supply without resorting to traditional batteries. Radio-frequency-based energy harvesting is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the distributed hosting sensor structure, reducing its cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using CST simulation software in order to obtain S11 values below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). 
In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is connected in parallel, a sufficient power level can be achieved to feed very low consumption sensors. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a considerable increase in the amount of electromagnetic energy harvested, which rose from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
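The power available to a single rectenna at a given distance from the router can be estimated with the Friis transmission equation, Pr = Pt·Gt·Gr·(λ/4πd)². The transmit power, antenna gains and distance below are assumed values for a typical 2.45 GHz router, not measurements from this work:

```python
import math

# Hedged Friis-equation estimate of received power at one rectenna.
# pt_w, gt, gr and dist_m are illustrative assumptions.

def friis_received_power(pt_w, gt, gr, freq_hz, dist_m):
    lam = 3e8 / freq_hz                                  # wavelength, m
    return pt_w * gt * gr * (lam / (4 * math.pi * dist_m)) ** 2

# 100 mW router, dipole-like Tx gain, patch-like Rx gain, 1 m away.
pr = friis_received_power(pt_w=0.1, gt=1.6, gr=4.0,
                          freq_hz=2.45e9, dist_m=1.0)
```

The result lands in the tens of microwatts per rectenna, which is consistent in order of magnitude with the 0.6 mW harvested by the eight-element EW.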
Procedia PDF Downloads 167
1060 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective
Authors: Pardis Moslemzadeh Tehrani
Abstract:
Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. The transmission happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the insurance and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages; despite their complexity, these stages can be implemented in a smart contract on a blockchain. Manufacturing delays and limited traceability of third-party product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation. 
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
Keywords: blockchain, supply chain, IoT, smart contract
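The conditional, self-executing logic described above can be illustrated with a toy sketch. This is a plain-Python illustration of the if-then structure only; a real smart contract would be deployed on a blockchain (e.g. written in Solidity), and the `ShipmentContract` class with its price and temperature parameters is entirely hypothetical.

```python
class ShipmentContract:
    """Toy illustration of smart-contract logic: payment is released
    automatically only if agreed conditions encoded in code are met.
    Hypothetical example, not real blockchain code."""

    def __init__(self, price, max_temp_c):
        self.price = price            # agreed payment amount
        self.max_temp_c = max_temp_c  # cold-chain condition
        self.paid = False

    def confirm_delivery(self, temperature_log):
        # Condition encoded in code: the cold chain was never broken.
        if all(t <= self.max_temp_c for t in temperature_log):
            self.paid = True          # "transmission" executes automatically
        return self.paid

contract = ShipmentContract(price=1000, max_temp_c=8.0)
print(contract.confirm_delivery([4.1, 5.0, 6.7]))  # True: payment released
print(ShipmentContract(1000, 8.0).confirm_delivery([4.1, 9.2]))  # False
```

The point of the sketch is that no intermediary decides whether to pay: the code itself evaluates the logged conditions and triggers the transfer.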
Procedia PDF Downloads 127
1059 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, radiotherapy technology and modalities have advanced, and preclinical models are becoming more important in cancer research. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect observed in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose, using commercial software. The small animal phantom was modeled from the 4D digital Moby mouse whole-body phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab; the two-dimensional CT images were then converted to a three-dimensional image, which can be segmented and cropped in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to create a mask connecting all regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body/tissue, and lung); to separate neighboring pixels between the lung and the body (tissue), we used the region-growing function of the Mimics software. We then obtained a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We then modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. In this way, we acquired a 3D object of a realistic small animal phantom.
The width of the small animal phantom was 2.631 cm, the thickness 2.361 cm, and the length 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of a small preclinical animal phantom would increase the reliability of absorbed-dose verification in small animals for preclinical studies.
Keywords: Mimics, preclinical small animal, segmentation, 3D printer
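The thresholding and region-selection steps described above can be sketched with standard tools. This is a minimal illustration, not the Mimics implementation: the HU thresholds, seed point, and toy volume are assumptions, and region growing is approximated here by selecting the seed's connected component after thresholding.

```python
import numpy as np
from scipy import ndimage

def segment_ct(volume_hu, seed, thresholds=(-500, 300)):
    """Threshold a CT volume (in HU) and keep the connected region
    containing a seed voxel. Thresholds and seed are illustrative."""
    lo, hi = thresholds
    mask = (volume_hu >= lo) & (volume_hu <= hi)   # thresholding step
    labels, _ = ndimage.label(mask)                # connected components
    return labels == labels[seed]                  # keep seed's component

# Toy 3D volume: a "body" block of soft tissue (~40 HU) in air (-1000 HU)
vol = np.full((20, 20, 20), -1000.0)
vol[5:15, 5:15, 5:15] = 40.0
body = segment_ct(vol, seed=(10, 10, 10))
print(body.sum())  # number of voxels in the grown region
```

On real CT data, separate threshold ranges would be chosen for bone, soft tissue, and lung, and the resulting masks refined before meshing.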
Procedia PDF Downloads 366
1058 An ICF Framework for Game-Based Experiences in Geriatric Care
Authors: Marlene Rosa, Susana Lopes
Abstract:
Board games have been used for different purposes in geriatric care, demonstrating good results for health in general. However, there is no conceptual framework to help professionals and researchers in this area design intervention programs or plan future studies. The aim of this study was to provide a pilot collection of board games' serious purposes in geriatric care, using a WHO framework for health and disability. Case studies were developed in seven geriatric residential institutions from the central region of Portugal that are included in the AGILAB program. The AGILAB program is a serious-game-based method to train professionals in, and spread, the implementation of board games in geriatric care. Each institution provides 2 hours/week of experiences using the TATI Hand Game for serious purposes and then answers questions about a case study (player characteristics; changes in players' health attributable to this game experience). Two independent researchers read the information and classified it according to the categories of the International Classification of Functioning, Disability and Health (ICF). Any discrepancy was resolved in a consensus meeting. Results indicate important variability in body functions and structures: specific mental functions (e.g., b140 Attention functions, b144 Memory functions), b156 Perceptual functions, b2 sensory functions and pain (e.g., b230 Hearing functions; b265 Touch function; b280 Sensation of pain), b7 neuromusculoskeletal and movement-related functions (e.g., b730 Muscle power functions; b760 Control of voluntary movement functions; b710 Mobility of joint functions). Less variability was found in activities and participation domains, such as purposeful sensory experiences (d110-d129) (e.g., d115 Listening), communication (d3), d710 basic interpersonal interactions, and d920 recreation and leisure (d9200 Play; d9205 Socializing).
In conclusion, this framework, designed from a brief game-based experience, includes mental, perceptual, sensory, neuromusculoskeletal, and movement-related functions, and participation in sensory, communication, and leisure domains. More studies, covering different experiences and a larger number of users, should be developed to provide a more comprehensive ICF framework for game-based experiences in geriatric care.
Keywords: board game, aging, framework, experience
Procedia PDF Downloads 126
1057 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. "social", "reward") across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their "proximity" to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., "vision", "decision making") or collection of terms (e.g., "theory of mind", "social", "agent"), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art "open vocabulary" methods that go beyond mere word counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method's utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
Keywords: fMRI, machine learning, meta-analysis, text analysis
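The voxel-as-vector construction can be sketched as follows. The 3-dimensional "semantic space" and its word vectors are toy assumptions standing in for embeddings learned from actual study texts, which would have hundreds of dimensions.

```python
import numpy as np

# Toy semantic space: hand-picked 3-d vectors so that "vision" and
# "visual" are near each other and far from "reward".
vecs = {
    "vision": np.array([1.0, 0.1, 0.0]),
    "visual": np.array([0.9, 0.2, 0.0]),
    "reward": np.array([0.0, 0.1, 1.0]),
}

def normalize(v):
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(np.dot(normalize(a), normalize(b)))

# A voxel is the normalized vector sum of all words appearing in the
# studies that reported activation at that voxel.
voxel_words = ["vision", "visual", "vision"]
voxel_vec = normalize(np.sum([vecs[w] for w in voxel_words], axis=0))

print(round(cosine(voxel_vec, vecs["vision"]), 3))  # close to 1
print(round(cosine(voxel_vec, vecs["reward"]), 3))  # close to 0
```

Repeating this for every voxel in a brain mask yields the proximity maps described above; averaging several term vectors before the cosine step gives the multi-term ("theory of mind", "social", "agent") variant.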
Procedia PDF Downloads 449
1056 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and well-being can deteriorate. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. In this study, the design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), were used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA) database. Various data pre-processing techniques were employed, and model training was conducted using the scikit-learn Python library. Model evaluation was executed using metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for each algorithm was selected through grid search for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is best suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of the house, 'Region' of residence, whether a household uses 'Electricity', and 'Type of toilet facility' are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved a score of 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, household wealth status, predictive model, wealth status prediction
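The grid-search-plus-Random-Forest training loop described above can be sketched with scikit-learn. The synthetic dataset and the small hyper-parameter grid below are stand-ins for the EDHS survey variables and the grid actually searched in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the EDHS features (the real study used survey
# variables such as household head age, roof material, region, etc.).
X, y = make_classification(n_samples=600, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Grid search over a small, illustrative hyper-parameter grid,
# cross-validated on the training split.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, None]},
    cv=3,
    scoring="accuracy",
)
grid.fit(X_tr, y_tr)

# Evaluate the refit best model on the held-out test split.
acc = accuracy_score(y_te, grid.predict(X_te))
print(grid.best_params_, round(acc, 3))
```

The same pattern, with `AdaBoostClassifier` or the LightGBM/XGBoost estimators substituted in, reproduces the comparison across the four ensemble algorithms.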
Procedia PDF Downloads 39
1055 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study
Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros
Abstract:
This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received as the optimization objective. The methodology is evaluated in a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution, and degradation. The visual inspection assesses the general condition of the modules and the structure, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis compares the energy production reported by the monitoring systems with the values estimated in simulation. The pollution analysis uses the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical pollution values. The soiling rate is calculated with data collected over two years of energy generation at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the temperature degradation of the PV modules is assessed by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision-making related to the maintenance of photovoltaic systems.
This is particularly relevant given the projected increase in the installation of solar photovoltaic systems in power systems, associated with the commitments made in the Paris Agreement to reduce CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
Keywords: asset management, PV module, optimization, maintenance
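The trade-off between soiling losses and cleaning costs that drives the optimization can be sketched as follows. All numbers (baseline daily energy, soiling rate, tariff, cleaning cost) are illustrative assumptions, not values from the case study.

```python
import numpy as np

def daily_energy(e0_kwh, soiling_rate, days):
    """Energy per day with exponential soiling decay since last cleaning."""
    return e0_kwh * np.exp(-soiling_rate * np.arange(days))

def revenue_per_day(interval_days, e0_kwh=25.0, rate=0.004,
                    tariff=0.12, cleaning_cost=15.0):
    """Average daily revenue over one cleaning cycle: energy sold at the
    feed-in tariff minus the cleaning cost, divided by cycle length."""
    energy = daily_energy(e0_kwh, rate, interval_days).sum()
    return (energy * tariff - cleaning_cost) / interval_days

# Pick the cleaning interval (in days) that maximizes average revenue.
intervals = range(5, 181, 5)
best = max(intervals, key=revenue_per_day)
print(best, round(revenue_per_day(best), 3))
```

A too-short interval wastes money on cleaning; a too-long one loses energy to soiling, so the average daily revenue peaks at an intermediate interval. The full methodology additionally folds in the degradation-based health index to set the module replacement period.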
Procedia PDF Downloads 53
1054 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but on tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (convolutional neural networks)? Will DL become the universal tool for data classification? Current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining a single image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces one image per variable, each analyzable by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability and can therefore be rendered as a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR, giving more than 100,000 combinations in total. In the end, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on the NIC. This image therefore contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic dataset of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
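The mapping from NIC probabilities to grey levels can be sketched as follows. The probability grids, tile size, and side-by-side assembly are hypothetical illustrations; the actual DeepNIC images are at least 1166x1167 pixels and combine 10 NICs with logical operators.

```python
import numpy as np

def nic_to_tile(prob_matrix, tile_size=4):
    """Map a small matrix of NIC probabilities (0..1) to an 8-bit
    grey-level tile, upscaling each probability to a tile_size block."""
    grey = (np.clip(np.asarray(prob_matrix), 0.0, 1.0) * 255).astype(np.uint8)
    return np.kron(grey, np.ones((tile_size, tile_size), dtype=np.uint8))

# Hypothetical probability grids for two NICs of one variable, obtained
# by varying the two Neurop hyperparameters.
nic_a = [[0.1, 0.9], [0.5, 1.0]]
nic_b = [[0.0, 0.2], [0.8, 0.4]]

# Assemble the per-variable image by placing the NIC tiles side by side.
image = np.hstack([nic_to_tile(nic_a), nic_to_tile(nic_b)])
print(image.shape)  # (8, 16)
```

Each variable thus gets its own image, which can be fed to an ordinary CNN; the grey level of a region encodes how predictive the variable is under the corresponding hyperparameter setting.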
Procedia PDF Downloads 125
1053 Research on Reducing Food Losses by Extending the Date of Minimum Durability on the Example of Cereal Products
Authors: Monika Trzaskowska, Dorota Zielinska, Anna Lepecka, Katarzyna Neffe-Skocinska, Beata Bilska, Marzena Tomaszewska, Danuta Kolozyn-Krajewska
Abstract:
Microbiological quality and food safety are important food characteristics. Regulation (EU) No 1169/2011 of the European Parliament and of the Council on the provision of food information to consumers introduces the obligation to provide information on the 'use-by' date or the date of minimum durability (DMD). The latter is the date until which a properly stored or transported foodstuff retains its physical, chemical, microbiological, and organoleptic properties; it should be preceded by 'best before' and is used for durable products, e.g., pasta. In relation to reducing food losses, the question arises whether products retain their quality and safety beyond the currently declared date of minimum durability. The aim of the study was to assess the sensory quality and microbiological safety of selected cereal products, i.e., pasta and millet, after the DMD. The scope of the study was to determine markers of microbiological quality, i.e., the total viable count (TVC), the number of bacteria from the Enterobacteriaceae family, and the total yeast and mold count (TYMC), on the last day of the DMD and after 1 and 3 months of storage. In addition, the presence of Salmonella and Listeria monocytogenes was examined on the last day of the DMD. The sensory quality of the products was assessed by quantitative descriptive analysis (QDA); the intensity of 14 differentiators and the overall quality were defined and determined. In the tested samples of millet and pasta, no pathogenic Salmonella or Listeria monocytogenes bacteria were found. The values of the selected microbiological quality and safety indicators on the last day of the DMD were in the range of about 1-3 log cfu/g, demonstrating the good microbiological quality of the tested food. Comparing the products, a higher number of microorganisms was found in the samples of millet. After 3 months of storage, the TVC decreased in millet, while in pasta it increased.
In both products, the number of bacteria from the Enterobacteriaceae family decreased. In contrast, the TYMC increased in samples of millet and decreased in pasta. The intensity of the sensory characteristics varied over the studied period, remaining at a similar level or increasing. In millet, the intensity of the 'cooked porridge' odor and flavor increased 3 months after the DMD; similarly, in the pasta, the smell and taste of 'cooked pasta' were more intense. To sum up, on the last day of the date of minimum durability, the researched products were characterized by very good microbiological and sensory quality, which was maintained for 3 months after this date. Based on these results, the date of minimum durability of the tested products could be extended. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'social and economic development of Poland in the conditions of globalizing markets' (GOSPOSTRATEG), acronym PROM.
Keywords: date of minimum durability, food losses, food quality and safety, millet, pasta
Procedia PDF Downloads 161
1052 Opto-Thermal Frequency Modulation of Phase Change Micro-Electro-Mechanical Systems
Authors: Syed A. Bukhari, Ankur Goswmai, Dale Hume, Thomas Thundat
Abstract:
Here we demonstrate mechanical detection of a photo-induced insulator-to-metal transition (MIT) in ultra-thin vanadium dioxide (VO₂) micro-strings using < 100 µW of optical power. A highly focused laser beam heated the string locally, resulting in through-plane and axial heat diffusion; the localized heating can raise the temperature by more than 60 ºC. The heated region of the VO₂ can transform from the insulating (monoclinic) to the conducting (rutile) phase, leading to lattice compression and a stiffness increase in the resonator. The mechanical frequency of the resonator can be tuned by changing the optical power and wavelength. The first-mode resonance frequency was tuned in three different ways: a decrease in frequency below a critical optical power, a large increase between 50-120 µW, followed by a large decrease in frequency for optical powers greater than 120 µW. The dynamic mechanical response was studied as a function of incident optical power and gas pressure. The resonance frequency and amplitude of vibration were found to decrease with increasing laser power from 25-38 µW and to increase by 1-2% when the laser power was further increased to 52 µW. The transition in the films was induced and detected both by a single pump-and-probe source and by employing external optical sources of different wavelengths. This trend in the dynamic parameters of the strings can be correlated with the reversible insulator-to-metal transition in VO₂ films, which changes the density of the material and hence the overall stiffness of the strings, leading to changes in string dynamics. The increase in frequency at a particular optical power manifests a transition to a more ordered metallic phase, which exerts tensile stress on the string. The decrease in frequency at higher optical powers can be correlated with the poor phonon thermal conductivity of VO₂ in the conducting phase.
The poor thermal conductivity of VO₂ can force in-plane penetration of heat, heating the SiN layer supporting the VO₂, which can result in a decrease in resonance frequency. This noninvasive, non-contact, laser-based excitation and detection of the insulator-to-metal transition using micro-string resonators, at room temperature and with laser powers of a few µW, is important for low-power electronics and optical switching applications.
Keywords: thermal conductivity, vanadium dioxide, MEMS, frequency tuning
Procedia PDF Downloads 120