Search results for: 3D magnetic potential vector and electric scalar potential (A
197 Simulation, Design, and 3D Print of Novel Highly Integrated TEG Device with Improved Thermal Energy Harvest Efficiency
Abstract:
Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated using the Finite Element Method (FEM) in COMSOL Multiphysics software. Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing 3D selective laser melting (SLM) printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions using the 3D SLM printing technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. This integration results in a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching the COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping.
The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase the solar energy harvest efficiency.
Keywords: thermoelectric, finite element method, 3d print, energy conversion
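As a purely illustrative aside to the abstract above, the back-of-envelope sketch below estimates the open-circuit voltage and matched-load power of a series-connected array of p-n couples from the standard Seebeck relations. All numerical values (number of couples, per-couple Seebeck coefficient, temperature difference, leg resistance) are hypothetical placeholders, not figures from the simulation or the fabricated device.

```python
# Illustrative back-of-envelope estimate of series-connected TEG output.
# All numbers below are hypothetical placeholders, NOT values from the abstract.

def teg_output(n_junctions: int, seebeck_pn: float, delta_t: float,
               leg_resistance: float) -> tuple[float, float]:
    """Open-circuit voltage and matched-load power of n p-n couples in series.

    seebeck_pn     -- combined Seebeck coefficient of one p-n couple [V/K]
    delta_t        -- temperature difference across the junctions [K]
    leg_resistance -- electrical resistance of one couple [ohm]
    """
    v_oc = n_junctions * seebeck_pn * delta_t      # Seebeck voltage
    r_int = n_junctions * leg_resistance           # total internal resistance
    p_max = v_oc ** 2 / (4 * r_int)                # power delivered to a matched load
    return v_oc, p_max

if __name__ == "__main__":
    # Hypothetical figures: 500 couples, ~400 uV/K per couple, 30 K gradient, 10 mohm per couple
    v, p = teg_output(n_junctions=500, seebeck_pn=400e-6, delta_t=30, leg_resistance=0.01)
    print(f"Open-circuit voltage: {v:.2f} V, matched-load power: {p * 1000:.1f} mW")
```

Under these assumptions the matched-load output scales linearly with the number of couples, which is one way to see why packing a high density of junctions into a small volume matters for the proposed horizontal-array design.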
Procedia PDF Downloads 62
196 Characterization of the Lytic Bacteriophage VbɸAB-1 against Drug Resistant Acinetobacter baumannii Isolated from Hospitalized Pressure Ulcers Patients
Authors: M. Doudi, M. H. Pazandeh, L. Rahimzadeh Torabi
Abstract:
Bedsores are pressure ulcers that occur on the skin or underlying tissue due to being immobile and lying in bed for extended periods. Bedsores have the potential to progress into open ulcers, increasing the possibility of a variety of bacterial infections. Acinetobacter baumannii, a pathogen of considerable clinical importance, exhibits a significant association with bedsore (pressure ulcer) infections and manifests a wide spectrum of antibiotic resistance. The emergence of drug resistance has led researchers to focus on alternative methods, particularly phage therapy, for tackling bacterial infections. Phage therapy has emerged as a novel therapeutic approach to regulate the activity of these agents. The management of bacterial infections greatly benefits from the clinical utilization of bacteriophages as a valuable antimicrobial intervention. The primary objective of this investigation was to isolate and characterize a potent bacteriophage capable of targeting multidrug-resistant (MDR) and extensively drug-resistant (XDR) bacteria obtained from pressure ulcers. In the present study, A. baumannii strains were isolated and analyzed from a cohort of patients suffering from pressure ulcers at Taleghani Hospital in Ahvaz, Iran. An approach that included biochemical and molecular identification techniques was used to determine the taxonomic classification of the bacterial isolates at the genus and species levels. The molecular identification process was facilitated by using the 16S rRNA gene in combination with the universal primers 27F and 1492R. The bacteriophage was isolated from treatment plant sewage located in Isfahan, Iran. The main goal of this study was to evaluate different characteristics of the phage, such as its morphology, host range, how quickly it can enter a host cell, its stability at varying temperatures and pH levels, its effectiveness in killing bacteria, its one-step growth pattern, enzymatic digestion mapping, and its proteomic pattern. The findings demonstrated that, of the 50 specimens examined, 15 A. baumannii isolates were identified. These microorganisms are the predominant Gram-negative agents known to cause wound infections in individuals suffering from bedsores. The study's findings indicated a high prevalence of antibiotic resistance in the strains isolated from pressure ulcers, excluding the clinical strains that exhibited responsiveness to colistin. According to the findings obtained from assessments of the host range and morphological characteristics of bacteriophage VbɸAB-1, it can be concluded that this phage possesses specificity towards A. baumannii BAH_Glau1001 and was classified as a member of the Plasmaviridae family. The bacteriophage showed the strongest antibacterial effect at a temperature of 18 °C and a pH of 6.5. Through sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis of protein fragments, it was established that the proteins of bacteriophage VbɸAB-1 fell within a size range of 50 to 75 kilodaltons (kDa).
The numerous research findings on the effectiveness of phages and the safety studies conducted suggest that the phage studied in this research can be considered a practical solution and a recommended approach for controlling and treating stubborn pathogens in burn wounds among hospitalized patients.
Keywords: acinetobacter baumannii, extremely drug-resistant, phage therapy, surgery wound
Procedia PDF Downloads 92
195 A Model for Analysing Argumentative Structures and Online Deliberation in User-Generated Comments to the Website of a South African Newspaper
Authors: Marthinus Conradie
Abstract:
The conversational dynamics of democratically orientated deliberation continue to stimulate critical scholarship for their potential to bolster robust engagement between different sections of pluralist societies. Several axes of deliberation that have attracted academic attention include face-to-face vs. online interaction, and citizen-to-citizen communication vs. engagement between citizens and political elites. In all these areas, numerous researchers have explored deliberative procedures aimed at achieving instrumental goals such as securing consensus on policy issues, against procedures that prioritise expressive outcomes such as broadening the range of argumentative repertoires that discursively construct and mediate specific political issues. The study that informs this paper works in the latter stream. Drawing its data from the reader-comments section of a South African broadsheet newspaper, the study investigates online, citizen-to-citizen deliberation by analysing the discursive practices through which competing understandings of social problems are articulated and contested. To advance this agenda, the paper deals specifically with user-generated comments posted in response to news stories on questions of race and racism in South Africa. The analysis works to discern and interpret the various sets of discourse practices that shape how citizens deliberate contentious political issues, especially racism. Since the website in question is designed to encourage the critical comparison of divergent interpretations of news events, without feeding directly into national policymaking, the study adopts an analytic framework that traces how citizens articulate arguments, rather than the instrumental effects that citizen deliberations might exert on policy. The paper starts from the argument that such expressive interactions are particularly crucial to current trends in South African politics, given that the precise nature of race and racism remains contested and uncertain. Centred on a sample of 2358 conversational moves in 814 posts to 18 news stories on issues of race and racism, the analysis proceeds in a two-step fashion. The first stage conducts a qualitative content analysis that offers insights into the levels of reciprocity among commenters (do readers engage with each other or simply post isolated opinions?), as well as the structures of argumentation (do readers support opinions by citing evidence?). The second stage involves a more fine-grained discourse analysis, based on a theorisation of argumentation that delineates it into three components: opinions/conclusions, evidence/data to support opinions/conclusions, and warrants that explicate precisely how the evidence/data buttress the opinions/conclusions. By tracing the manifestation and frequency of specific argumentative practices, this study contributes to the archive of research currently aggregating around the practices that characterise South Africans' engagement with provocative political questions, especially racism and racial inequity. Additionally, the study contributes to recent scholarship on the affordances of Web 2.0 software by eschewing a simplistic bifurcation between cyber-optimism and cyber-pessimism, in favour of a more nuanced and context-specific analysis of the patterns that structure online deliberation.
Keywords: online deliberation, discourse analysis, qualitative content analysis, racism
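As an illustrative aside to the two-step analysis described above, the toy sketch below shows how hand-coded comments might be tallied for reciprocity and for the three argument components (opinion, evidence, warrant). The records and field names are invented for illustration; they are not the study's coding instrument or data.

```python
# Toy tally of coded comments: reciprocity plus the opinion/evidence/warrant scheme.
# The three example records below are invented, not data from the study.
from collections import Counter

coded_comments = [
    {"replies_to_other": True,  "components": {"opinion", "evidence", "warrant"}},
    {"replies_to_other": False, "components": {"opinion"}},
    {"replies_to_other": True,  "components": {"opinion", "evidence"}},
]

# Share of comments that engage another commenter rather than posting in isolation
reciprocity_rate = sum(c["replies_to_other"] for c in coded_comments) / len(coded_comments)
# Frequency of each argument component across the sample
component_counts = Counter(comp for c in coded_comments for comp in c["components"])
# Comments containing a complete opinion-evidence-warrant structure
fully_warranted = sum({"opinion", "evidence", "warrant"} <= c["components"] for c in coded_comments)

print(f"Reciprocity: {reciprocity_rate:.0%}")
print(f"Component frequencies: {dict(component_counts)}")
print(f"Fully warranted arguments: {fully_warranted}/{len(coded_comments)}")
```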
Procedia PDF Downloads 177
194 Tangible Losses, Intangible Traumas: Re-envisioning Recovery Following the Lytton Creek Fire 2021 through Place Attachment Lens
Authors: Tugba Altin
Abstract:
In an era marked by pronounced climate change consequences, communities are observed to confront traumatic events that yield both tangible and intangible repercussions. Such events not only cause discernible damage to the landscape but also deeply affect intangible aspects, including emotional distress and disruptions to cultural landscapes. The Lytton Creek Fire of 2021 serves as a case in point. Beyond the visible destruction, the less overt but profoundly impactful disturbance to place attachment (PA) is scrutinized. PA, representing the emotional and cognitive bonds individuals establish with their environments, is crucial for understanding how such events impact cultural identity and connection to the land. The study underscores the significance of addressing both tangible and intangible traumas for holistic community recovery. As communities renegotiate their affiliations with altered environments, the cultural landscape emerges as instrumental in shaping place-based identities. This renewed understanding is pivotal for reshaping adaptation planning. The research advocates for adaptation strategies rooted in the lived experiences and testimonies of the affected populations. By incorporating both the tangible and intangible facets of trauma, planning efforts are suggested to be more culturally attuned and emotionally insightful, fostering true resonance with the affected communities. Through such a comprehensive lens, this study contributes to enriching the climate change discourse, emphasizing the intertwined nature of tangible recovery and the imperative of emotional and cultural healing after environmental disasters. Following the pronounced aftermath of the Lytton Creek Fire in 2021, the research aims to deeply understand its impact on place attachment (PA), encompassing the emotional and cognitive bonds individuals form with their environments. The interpretive phenomenological approach, enriched by a hermeneutic framework, is adopted, emphasizing the experiences of the Lytton community and co-researchers. Phenomenology informed the understanding of 'place' as the focal point of attachment, providing insights into its formation and evolution after traumatic events. Data collection departs from conventional methods. Instead of traditional interviews, walking audio sessions and photo elicitation methods are utilized. These allow co-researchers to immerse themselves in the environment, re-experience, and articulate memories and feelings in real time. Walking audio facilitates reflections on spatial narratives post-trauma, while photo voices capture intangible emotions, enabling the visualization of place-based experiences. The analysis is collaborative, ensuring the co-researchers' experiences and interpretations are central. Emphasizing their agency in knowledge production, the process is rigorous, facilitated by the harmonious blend of interpretive phenomenology and hermeneutic insights. The findings underscore the need for adaptation and recovery efforts to address emotional traumas alongside tangible damages. By exploring PA post-disaster, the research not only fills a significant gap but also advocates for an inclusive approach to community recovery. Furthermore, the participatory methodologies employed challenge traditional research paradigms, heralding potential shifts in qualitative research norms.
Keywords: wildfire recovery, place attachment, trauma recovery, cultural landscape, visual methodologies
Procedia PDF Downloads 91
193 Evaluation of the Incorporation of Modified Starch in Puff Pastry Dough by Mixolab Rheological Analysis
Authors: Alejandra Castillo-Arias, Carlos A. Fuenmayor, Carlos M. Zuluaga-Domínguez
Abstract:
The connection between health and nutrition has driven the food industry to explore healthier and more sustainable alternatives. Key strategies to enhance nutritional quality and extend shelf life include reducing saturated fats and incorporating natural ingredients. One area of focus is the use of modified starch in baked goods, which has attracted significant interest in food science and industry due to its functional benefits. Modified starches are commonly used for their gelling, thickening, and water-retention properties. Derived from sources like waxy corn, potatoes, tapioca, or rice, these polysaccharides improve the thermal stability and resistance of dough. The use of modified starch enhances the texture and structure of baked goods, which is crucial for consumer acceptance. In this study, the effects of modified starch inclusion on dough used for puff pastry elaboration were evaluated by Mixolab analysis. This technique assesses flour quality by examining its behavior under varying conditions, providing a comprehensive profile of its baking properties. The analysis included measurements of water absorption capacity, dough development time, dough stability, softening, final consistency, and starch gelatinization. Each of these parameters offers insights into how the flour will perform during baking and the quality of the final product. The performance of wheat flour with varying levels of modified starch inclusion (10%, 20%, 30%, and 40%) was evaluated through Mixolab analysis, with a control sample consisting of 100% wheat flour. Water absorption, gluten content, and retrogradation indices were analyzed to understand how modified starch affects dough properties. The results showed that the inclusion of modified starch increased the absorption index, especially at levels above 30%, indicating a dough with better handling qualities and potentially improved texture in the final baked product. However, the reduction in wheat flour resulted in a lower kneading index, affecting dough strength. Conversely, incorporating more than 20% modified starch reduced the retrogradation index, indicating improved stability and resistance to crystallization after cooling. Additionally, the modified starch improved the gluten index, contributing to better dough elasticity and stability, providing good structural support and resistance to deformation during mixing and baking. As expected, the control sample exhibited a higher amylase index, due to the presence of enzymes in wheat flour. However, this is of low concern in puff pastry dough, as amylase activity is more relevant in fermented doughs, which is not the case here. Overall, the use of modified starch in puff pastry enhanced product quality by improving texture, structure, and shelf life, particularly when used at levels between 30% and 40%. This research underscores the potential of modified starches to address health concerns associated with traditional starches and to contribute to the development of higher-quality, consumer-friendly baked products. Furthermore, the findings suggest that modified starches could play a pivotal role in future innovations within the baking industry, particularly in products aiming to balance healthfulness with sensory appeal.
By incorporating modified starch into their formulations, bakeries can meet the growing demand for healthier, more sustainable products while maintaining the indulgent qualities that consumers expect from baked goods.
Keywords: baking quality, dough properties, modified starch, puff pastry
Procedia PDF Downloads 22
192 Cost Based Analysis of Risk Stratification Tool for Prediction and Management of High Risk Choledocholithiasis Patients
Authors: Shreya Saxena
Abstract:
Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for prediction and management of choledocholithiasis against the current practice at a tertiary hospital to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorized as high-risk for choledocholithiasis according to the guidelines to determine any associated cost benefits. Method: Data collection from our prior audit was used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against predicted costs associated with the recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool to determine patients at high risk of choledocholithiasis. 47% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 53% (7/13) went straight for ERCP. The average length of stay in the hospital was 7 days, with an additional day and cost of £328.00 (£117 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. In doing an MRCP prior to ERCP, there was a 130% increase in cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, it was an additional £1166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for key investigations or procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% (£2048.45) saving was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant cost savings per patient, and when considering the average length of stay, associated with direct endoscopy rather than an additional MRCP. Part of this is also because of the increased average length of stay associated with waiting for an MRCP. The above data support the ASGE guidelines for the management of patients at high risk of choledocholithiasis from a cost perspective.
The only caveat is our small data set, which may impact the validity of our average length of hospital stay figures and hence the total cost calculations.
Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery
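As an illustrative aside, the per-patient figures quoted in the abstract above can be re-derived with a few lines of arithmetic. The short sketch below uses only the costs stated in the abstract (the £328.00 MRCP-related extra, the £838.69 per bed-day, and the £252.04 vs £580.04 pathway costs); the variable names are ours.

```python
# Re-deriving the per-patient figures quoted in the abstract; variable names are ours.
mrcp_extra  = 328.00    # extra cost attributed to adding an MRCP (as quoted)
bed_day     = 838.69    # per-day admission cost (as quoted)
direct_ercp = 252.04    # pathway cost going straight to ERCP (as quoted)

mrcp_first = direct_ercp + mrcp_extra           # = 580.04, matches the abstract
increase   = mrcp_extra / direct_ercp           # ~1.30, i.e. the quoted 130% increase
extra_with_stay = mrcp_extra + bed_day          # ~1166.69, matches the abstract

print(f"MRCP-first pathway: £{mrcp_first:.2f} ({increase:.0%} more than £{direct_ercp:.2f})")
print(f"Including one additional bed-day: £{extra_with_stay:.2f} extra per patient")
```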
Procedia PDF Downloads 98
191 Laying the Proto-Ontological Conditions for Floating Architecture as a Climate Adaptation Solution for Rising Sea Levels: Conceptual Framework and Definition of a Performance Based Design
Authors: L. Calcagni, A. Battisti, M. Hensel, D. S. Hensel
Abstract:
Since the beginning of the 21st century, we have seen dynamic growth of water-based (WB) architecture, mainly due to the increasing threat of floods caused by sea level rise and heavy rains, all correlated with climate change. At the same time, the shortage of land available for urban development has also led architects, engineers, and policymakers to reclaim the seabed or to build floating structures. Furthermore, the drive to produce energy from renewable resources has expanded the offshore research, mining, and energy sector, which seeks new types of WB structures. In light of these considerations, the time is ripe to consider floating architecture as a full-fledged building typology. Currently, there is no universally recognized academic definition of a floating building. Research on floating architecture lacks a proper, commonly shared vocabulary and typology distinction. Moreover, there is no global international legal framework for urban development on water, and there is no structured performance-based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends, first of all, to overcome the semantic and typological issues through the conceptualization of floating architecture, laying the proto-ontological conditions for floating development, and secondly, to identify the parameters to be considered in the definition of a specific PBBD framework, setting the scene for national planning strategies. The theoretical overview and re-semanticization process involve the attribution of a new meaning to the term floating architecture. This terminological work of semantic redetermination is carried out through a systematic literature review and involves quantitative and historical research as well as logical argumentation methods. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters, looking towards the potential formation of a PBBD framework for floating development, takes urban and architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics (i.e., stability and buoyancy), from offshore and shipping regulatory frameworks. This study is carried out through an evidence-based assessment of regulatory systems that are effective in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, inhabiting water is proposed not only as a viable response to the problem of rising sea levels, and thus as a resilient frontier for urban development, but also as a response to energy insecurity, clean water and food shortages, environmental concerns, and urbanization, in line with Blue Economy principles and the Agenda 2030. This review shows how floating architecture is, to all intents and purposes, an urban adaptation measure and a solution towards self-sufficiency and energy-saving objectives. Moreover, the adopted methodology remains open to further improvement and integration, rather than being rigid and completely predetermined.
Along with new designs and functions that will come into play in practice, life on water will eventually seem no more unusual than life on land, especially by virtue of the multiple advantages it provides not only to users but also to the environment.
Keywords: adaptation measures, building typology, floating architecture, performance based building design, rising sea levels
Procedia PDF Downloads 97
190 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Building on the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of approximately 180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
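As a rough illustration of the per-taxel estimation step described above, the sketch below recovers a three-component force vector from a simulated normal-stress pattern by linear least squares. The influence matrix, noise level, and force values are invented placeholders; this is a generic stand-in, not the model, calibration data, or optimization routine used in the paper.

```python
# Generic stand-in for the per-taxel step: given a linear model mapping an unknown
# force vector to the normal-stress pattern it produces, the force is recovered by
# least squares. All matrices and values here are placeholders, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_taxels = 100                                    # e.g. a 10 x 10 array
A = rng.standard_normal((n_taxels, 3))            # hypothetical influence matrix: (Fx, Fy, Fz) -> stress
true_force = np.array([0.2, -0.1, 1.5])           # hypothetical contact force [N]
stress = A @ true_force + 0.01 * rng.standard_normal(n_taxels)  # simulated normal-stress readings

# Least-squares estimate of the force vector from the stress pattern
f_hat, *_ = np.linalg.lstsq(A, stress, rcond=None)
print("Estimated force vector [N]:", np.round(f_hat, 3))
```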
Procedia PDF Downloads 195
189 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies for reducing them, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling techniques were employed to survey 30 households from each of the two peasant associations, and a semi-structured questionnaire was used as a tool for data collection. Moreover, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce post-harvest losses, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. The data analysis was also done using a chain map, correlations, a stakeholder matrix, and gross margins. Mean comparisons between variables, such as ANOVA and t-tests, were used. The analysis shows that the present cactus pear value chain involves main actors and supporters. However, there is inadequate information flow and informal market linkages among actors in the cactus pear value chain. Farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The greatest post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4212 and 240 kg per season, respectively. The post-harvest losses were caused by limited farmer skills in on-farm management and harvesting, low market prices, limited market information, the absence of producer organizations, poor post-harvest handling, the absence of cold storage, the absence of collection centers, poor infrastructure, inadequate credit access, use of a traditional transportation system, the absence of quality control, illegal traders, inadequate research and extension services, and use of inappropriate packaging material. Therefore, some of the recommendations were providing adequate practical training, forming producer organizations, and constructing collection centers.
Keywords: cactus pear, post-harvest losses, profit margin, value-chain
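As a purely illustrative aside to the marketing-channel comparison above, the snippet below shows one way a gross margin per channel can be computed. The prices and variable costs are invented placeholders, not figures from the survey.

```python
# Hypothetical gross-margin comparison of the two sales channels mentioned above.
# Prices and costs are invented placeholders, not survey figures from the study.
def gross_margin(price_per_kg: float, variable_cost_per_kg: float) -> float:
    """Gross margin expressed as a share of revenue."""
    return (price_per_kg - variable_cost_per_kg) / price_per_kg

channels = {
    "collector": {"price": 4.0, "cost": 3.0},   # hypothetical birr per kg
    "processor": {"price": 6.0, "cost": 3.5},
}
for name, c in channels.items():
    print(f"{name}: gross margin {gross_margin(c['price'], c['cost']):.0%}")
```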
Procedia PDF Downloads 130
188 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States
Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss
Abstract:
Alzheimer's Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and it can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and were verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories:
• Diagnosis: costs of diagnosis using DISCERN™ and the CDP.
• False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication.
• True Positive (TP) diagnosis: AD medication costs; cost from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis; and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis.
• False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD.
A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces the utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net saving of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
Keywords: Alzheimer's disease, budget, dementia, diagnosis
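To make the structure of such a model concrete, the sketch below reproduces the four cost buckets named in the abstract (diagnosis, FN, TP, FP) in a minimal per-patient calculation. Every numeric input is a hypothetical placeholder and the simplifications are ours; this is not the published model or its parameters.

```python
# Skeleton of a budget-impact comparison with the cost buckets named in the abstract.
# All numeric inputs below are hypothetical placeholders, not parameters from the model.
from dataclasses import dataclass

@dataclass
class Pathway:
    diagnosis_cost: float      # per patient evaluated
    fn_rate: float             # share of AD patients missed
    fn_excess_cost: float      # avoidable cost per missed AD patient
    tp_delay_cost: float       # cost attributable to a later true-positive diagnosis
    fp_rate: float             # share of non-AD patients started on AD medication
    fp_med_cost: float         # medication cost per false positive

    def cost_per_patient(self, ad_prevalence: float) -> float:
        return (self.diagnosis_cost
                + ad_prevalence * (self.fn_rate * self.fn_excess_cost + self.tp_delay_cost)
                + (1 - ad_prevalence) * self.fp_rate * self.fp_med_cost)

cdp     = Pathway(900, 0.25, 8000, 1500, 0.10, 2000)   # hypothetical current pathway
discern = Pathway(1200, 0.10, 8000,  500, 0.05, 2000)  # hypothetical new test pathway

prevalence, cohort = 0.55, 10_000
saving = (cdp.cost_per_patient(prevalence) - discern.cost_per_patient(prevalence)) * cohort
print(f"Hypothetical net saving for the cohort: ${saving:,.0f}")
```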
Procedia PDF Downloads 138
187 Internet of Assets: A Blockchain-Inspired Academic Program
Authors: Benjamin Arazi
Abstract:
Blockchain is the technology behind cryptocurrencies like Bitcoin. It revolutionizes the meaning of trust in the sense of offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: "Blockchain Marks the Next Step in the Internet's Evolution". Blockchain was listed as #1 in LinkedIn's Learning Blog list of the "most in-demand hard skills needed in 2020". As stated there: "Blockchain's novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset". GSMA, a leading telco organization of mobile communications operators, declared that "Blockchain has the potential to be for value what the Internet has been for information". Motivated by these seminal observations, this paper presents the foundations of a Blockchain-based "Internet of Assets" academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech (financial technology) and mobile communications services. The next application in line is healthcare. These challenges are addressed on the basis of the available extensive professional literature. Blockchain-based asset communication is based on extending the principle of Bitcoin, starting with the basic question: if digital money that travels across the universe can 'prove its own validity', can this principle be applied to digital content? A groundbreaking positive answer here led to the concept of the "smart contract" and consequently to DLT (Distributed Ledger Technology), where the word 'distributed' relates to the non-existence of reliable central entities or trusted third parties. The terms Blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group compiled comprehensive reports analyzing the contribution of DLT/Blockchain to Fintech. The European Central Bank and Bank of Japan are engaged in Project Stella, "Balancing confidentiality and auditability in a distributed ledger environment". 130 DLT/Blockchain-focused Fintech startups are now operating in Switzerland. The impact of Blockchain on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: "Blockchain is considered one of the digital economy's most disruptive technologies". Samples of Blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of Blockchain contributions to the telco industry: establishing identity verification; record of transactions for easy cost settlement; automatic triggering of roaming contracts, which enables near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts per costs incurred in accordance with agreement tariffs. This clearly demonstrates an academic education structure where fundamental technologies are studied in classes together with these two application areas. Advanced courses treating specific implementations then follow separately.
All are under the roof of "Internet of Assets".
Keywords: blockchain, education, financial technology, mobile telecommunications services
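As a teaching-oriented aside to the "prove its own validity" idea referenced above, the toy sketch below chains asset-transfer records by hashing each block together with the hash of its predecessor, so that tampering with any earlier transfer invalidates the chain. It is a minimal classroom illustration, not a production blockchain, a consensus protocol, or any specific platform mentioned in the abstract.

```python
# Minimal hash-chain illustration: each block commits to the previous block's hash,
# so altering any recorded asset transfer breaks validation of every later block.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transfer: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transfer": transfer})

def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5 units")
add_block(chain, "Bob pays Carol 2 units")
print(is_valid(chain))                               # True
chain[0]["transfer"] = "Alice pays Bob 500 units"    # tamper with history
print(is_valid(chain))                               # False: the chain no longer self-validates
```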
Procedia PDF Downloads 180
186 Optimal Pressure Control and Burst Detection for Sustainable Water Management
Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana
Abstract:
Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing pressure reducing valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the Benchmark Water Distribution System (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. To enhance precision, burst event localization is achieved through meticulous analysis of time differentials in the arrival of negative pressure waveforms across distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and the concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
Keywords: pressure reduce valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring
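As a rough, self-contained illustration of the two burst-handling steps summarized above, the sketch below (1) flags a sudden pressure drop with a one-sided CUSUM statistic and (2) locates a burst between two sensors from the difference in arrival times of the negative pressure wave. The threshold, wave speed, sensor spacing, and signal are invented placeholders, not the paper's data or tuned parameters.

```python
# (1) One-sided CUSUM flags a sustained drop in pressure; (2) the arrival-time
# difference of the negative pressure wave at two sensors locates the burst.
# All numbers are invented placeholders for illustration only.
import numpy as np

def cusum_drop(signal, target, k=0.5, h=5.0):
    """Return the first index where the downward CUSUM statistic exceeds threshold h."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (target - x) - k)   # accumulate evidence of a pressure drop
        if s > h:
            return i
    return None

def locate_burst(length_m, wave_speed, t_a, t_b):
    """Distance of the burst from sensor A, given arrival times at sensors A and B."""
    return (length_m + wave_speed * (t_a - t_b)) / 2

# Synthetic pressure trace: steady ~50 units, then a burst-induced drop to ~42 units
rng = np.random.default_rng(1)
pressure = np.concatenate([np.full(200, 50.0), np.full(100, 42.0)]) + rng.normal(0, 0.3, 300)

print("Burst flagged at sample:", cusum_drop(pressure, target=50.0))
print("Burst location from sensor A [m]:",
      locate_burst(length_m=100, wave_speed=400, t_a=0.05, t_b=0.15))
```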
Procedia PDF Downloads 87
185 Biodegradation of Chlorophenol Derivatives Using Macroporous Material
Authors: Dmitriy Berillo, Areej K. A. Al-Jwaid, Jonathan L. Caplin, Andrew Cundy, Irina Savina
Abstract:
Chlorophenols (CPs) are used as precursors in the production of higher CPs and dyestuffs, and as preservatives. Contamination of groundwater by CPs lies in the range of 0.15-100 mg/L. The EU has set maximum concentration limits for pesticides and their degradation products of 0.1 μg/L and 0.5 μg/L, respectively. People working in industries which produce textiles, leather products, domestic preservatives, and petrochemicals are most heavily exposed to CPs. The International Agency for Research on Cancer has categorized CPs as potential human carcinogens. Existing multistep water purification processes for CPs, such as hydrogenation, ion exchange, liquid-liquid extraction, adsorption by activated carbon, forward and inverse osmosis, electrolysis, sonochemistry, UV irradiation, and chemical oxidation, are not always cost-effective and can cause the formation of even more toxic or mutagenic derivatives. Bioremediation of CP derivatives utilizing microorganisms results in 60 to 100% decontamination efficiency, and the process is more environmentally friendly compared with existing physico-chemical methods. Microorganisms immobilized onto a substrate show many advantages over free bacteria systems, such as higher biomass density, higher metabolic activity, and resistance to toxic chemicals. They also enable continuous operation, avoiding the requirement for biomass-liquid separation. The immobilized bacteria can be reused several times, which opens the opportunity for developing cost-effective processes for wastewater treatment. In this study, we develop a bioremediation system for CPs based on macroporous materials, which can be efficiently used for wastewater treatment. Conditions for the preparation of the macroporous material from specific bacterial strains (Pseudomonas mendocina and Rhodococcus koreensis) were optimized. The concentration of bacterial cells was kept constant; the only difference was the type of cross-linking agent used, e.g., glutaraldehyde or novel polymers, which were utilized at concentrations of 0.5 to 1.5%. SEM images and rheology analysis of the material indicated a monolithic macroporous structure. Phenol was chosen as a model system to optimize the function of the cryogel material and to estimate its enzymatic activity, since it is relatively less toxic and harmful compared to CPs. Several types of macroporous systems comprising live bacteria were prepared. The viability of the cross-linked bacteria was checked using the Live/Dead BacLight kit and laser scanning confocal microscopy, which revealed the presence of viable bacteria with the novel cross-linkers, whereas the control material cross-linked with glutaraldehyde (GA) contained mostly dead cells. The bioreactors based on bacteria were used for phenol degradation in batch mode at an initial concentration of 50 mg/L, pH 7.5, and a temperature of 30°C. Bacterial strains cross-linked with GA showed an insignificant ability to degrade phenol, and only for one week, but a combination of cross-linking agents showed higher stability, viability, and the possibility of reuse for at least five weeks. Furthermore, conditions for CP degradation will be optimized, and the chlorophenol degradation rates will be compared to those for phenol. This is a cutting-edge bioremediation approach, which allows the purification of wastewater from such compounds without a separation step to remove free planktonic bacteria. Acknowledgments: Dr. Berillo D. A.
is very grateful to the Marie Curie Individual Fellowship Program for funding this research.
Keywords: bioremediation, cross-linking agents, cross-linked microbial cell, chlorophenol degradation
Procedia PDF Downloads 213
184 Rationally Designed Dual PARP-HDAC Inhibitor Elicits Striking Anti-leukemic Effects
Authors: Amandeep Thakur, Yi-Hsuan Chu, Chun-Hsu Pan, Kunal Nepali
Abstract:
The transfer of ADP-ribose residues from nicotinamide adenine dinucleotide (NAD) onto target substrates (PARylation) is catalyzed by poly(ADP-ribose) polymerases (PARPs). Amongst the PARP family members, the DNA damage response in cancer is mainly regulated by PARP1 and PARP2. The blockade of DNA repair by PARP inhibitors leads to the progression of DNA single-strand breaks (induced by some triggering factors) to double-strand breaks. Notably, PARP inhibitors are remarkably effective in cancers with defective homologous recombination repair (HRR). In particular, cancer cells with BRCA mutations are responsive to therapy with PARP inhibitors. The aforementioned requirement for PARP inhibitors to be effective confers a narrow activity spectrum on PARP inhibitors, which hinders their clinical applicability. Thus, the quest to expand the application horizons of PARP inhibitors beyond BRCA mutations is the need of the hour. Literature precedents reveal that HDAC inhibition induces BRCAness in cancer cells and can broaden the therapeutic scope of PARP inhibitors. Driven by such disclosures, dual inhibitors targeting both PARP and HDAC enzymes were designed by our research group to extend the efficacy of PARP inhibitors beyond BRCA-mutated cancers to cancers with induced BRCAness. The design strategy involved the installation of Veliparib, an investigational PARP inhibitor, as the surface recognition part in the HDAC inhibitor pharmacophore model. The chemical architecture of Veliparib was deemed an appropriate starting point for the generation of dual inhibitors by virtue of its size and structural flexibility. A validatory docking study was conducted at the outset to predict the binding mode of the designed dual modulatory chemical architectures. Subsequently, the designed chemical architectures were synthesized via a multistep synthetic route and evaluated for antitumor efficacy. Delightfully, one compound manifested impressive anti-leukemic effects (HL-60 cell line) mediated via dual inhibition of PARP and class I HDACs. The outcome of the western blot analysis revealed that the compound could downregulate the expression levels of PARP1 and PARP2 and the HDAC isoforms (HDAC1, 2, and 3). Also, the dual PARP-HDAC inhibitor upregulated the protein expression of acetylated histone H3, confirming its ability to abrogate class I HDAC activity. In addition, the dual modulator could arrest the cell cycle at the G0/G1 phase and induce autophagy. Further, a polymer-based nanoformulation of the dual inhibitor was furnished to afford targeted delivery of the dual inhibitor to the cancer site. Transmission electron microscopy (TEM) results indicate that the nanoparticles were monodispersed and spherical. Moreover, the polymeric nanoformulation exhibited an appropriate particle size. Delightfully, pH-sensitive behavior was manifested by the polymeric nanoformulation, which led to selective antitumor effects towards the HL-60 cell line. In light of the magnificent anti-leukemic profile of the identified dual PARP-HDAC inhibitor, in-vivo studies (pharmacokinetics and pharmacodynamics) are currently being conducted. Notably, the optimistic findings of the aforementioned study have spurred our research group to initiate several medicinal chemistry campaigns to create bifunctional small molecule inhibitors addressing PARP as the primary target.
Keywords: PARP inhibitors, HDAC inhibitors, BRCA mutations, leukemia
Procedia PDF Downloads 23
183 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama’s Schools and Communities
Authors: Carly Cummings, Maria Soledad Peresin
Abstract:
Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s, Jim Crow laws segregated public schools and disproportionately allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown v. Board of Education in 1954, Alabama's school system continued to exhibit signs of segregation, compounded by "white flight" and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact on the state's education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with education performance, especially in science education. On average, minority groups scored the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama's leading economic industry, forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics, and school funding relate to science achievement scores using ArcGIS and Pearson's correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences. Problem-based active learning involves real-world problem-solving through hands-on experiences. Given Alabama's significant woodland coverage, educational materials on forest products were developed with consideration of their relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered around students using forest products to solve environmental challenges, such as water pollution, an increasing challenge within the state due to climate change. Pre- and post-assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to positively impact educational achievement. To this end, our work examines the results of surveys measuring educational professionals' attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.
Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education
Procedia PDF Downloads 68
182 In-situ Mental Health Simulation with Airline Pilot Observation of Human Factors
Authors: Mumtaz Mooncey, Alexander Jolly, Megan Fisher, Kerry Robinson, Robert Lloyd, Dave Fielding
Abstract:
Introduction: The integration of the WingFactors in-situ simulation programme has transformed the education landscape at the Whittington Health NHS Trust. To date, there have been a total of 90 simulations, 19 of which were aimed at Paediatric trainees, including 2 Child and Adolescent Mental Health (CAMHS) scenarios. The opportunity for joint debriefs provided by clinical faculty and airline pilots has created an exciting new avenue to explore human factors within psychiatry. Through the use of real clinical environments and primed actors, the benefits of high-fidelity simulation and of interdisciplinary and interprofessional learning have been highlighted. The use of in-situ simulation within psychiatry is a newly emerging concept, and its success here has been recognised by unanimously positive feedback from participants and acknowledgement through nomination for the Health Service Journal (HSJ) Award (Best Education Programme 2021). Methodology: The first CAMHS simulation featured a collapsed patient in the toilet with a ligature tied around her neck, accompanied by a distressed parent. This required participants to consider the emergency physical management of the case, alongside helping to contain the mother and maintaining situational awareness when transferring the patient to an appropriate clinical area. The second simulation was based on a 17-year-old girl attempting to leave the ward after presenting with an overdose, posing a potential risk to herself. The safe learning environment enabled participants to explore techniques to engage the young person, understand their concerns, and consider the involvement of other members of the multidisciplinary team. The scenarios were followed by an immediate 'hot' debrief, combining technical feedback with human factors feedback from uniformed airline pilots and clinicians. The importance of psychological safety was paramount, encouraging open and honest contributions from all participants. Key learning points were summarized into written documents and circulated. Findings: The in-situ simulations demonstrated the need for practical changes both in the Emergency Department and on the Paediatric ward. The presence of airline pilots provided a novel way to debrief on human factors. The following key themes were identified:
- Team briefing ('Golden 5 minutes'): taking a few moments to establish experience, initial roles, and strategies amongst the team can reduce the need for conversations in front of a distressed patient or anxious relative.
- Use of checklists/guidelines: principles associated with checklist usage (control of pace, rigor, team situational awareness), instead of reliance on accurate memory recall when under pressure.
- Read-back: immediate repetition of safety-critical instructions (e.g. drug/dosage) to mitigate the risks associated with miscommunication.
- Distraction management: balancing the risk of losing a team member to manage a distressed relative versus the impact on the care of the young person.
- Task allocation: the value of implementing 'The 5A's' (Availability, Address, Allocate, Ask, Advise) for effective task allocation.
Conclusion: 100% of participants have requested more simulation training. Involvement of airline pilots has led to a shift in hospital culture, bringing to the forefront the value of human factors focused training and multidisciplinary simulation.
This has been of significant value in not only physical health, but also mental health simulation.Keywords: human factors, in-situ simulation, inter-professional, multidisciplinary
Procedia PDF Downloads 107181 Finite Element Method (FEM) Simulation, Design, and 3D Print of Novel Highly Integrated PV-TEG Device with Improved Solar Energy Harvest Efficiency
Abstract:
Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there is growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is subjected to simulation through the Finite Element Method (FEM) within COMSOL Multiphysics software. Various device configurations are simulated to identify optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing 3D Selective laser melting (SLM) printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a singular row of horizontal p-n junctions using the 3D SLM printing technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermal insulating glass. This integration created in a highly compact three-dimensional TEG device with high density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching with COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. 
The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase the solar energy harvest efficiency.Keywords: thermoelectric, finite element method, 3D print, energy conversion
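To give a feel for the output scale such a densely packed p-n array can reach, the following is a minimal back-of-envelope sketch in Python. The Seebeck coefficients, temperature difference, per-junction resistance and junction counts are illustrative assumptions rather than values reported above; only the standard series-array relations V_oc = N(S_p - S_n)ΔT and P_max = V_oc^2 / (4 R_int) are used.

```python
# Rough estimate of TEG output for a series array of Bi2Te3 p-n junctions.
# All parameter values are illustrative assumptions, not data from the study.

def teg_output(n_junctions, seebeck_p=2.0e-4, seebeck_n=-2.0e-4,
               delta_t=40.0, r_per_junction=5e-3):
    """Open-circuit voltage (V) and matched-load power (W) of a series TEG array."""
    v_oc = n_junctions * (seebeck_p - seebeck_n) * delta_t   # V_oc = N * (Sp - Sn) * dT
    r_int = n_junctions * r_per_junction                     # total internal resistance
    p_max = v_oc ** 2 / (4.0 * r_int)                        # maximum power at matched load
    return v_oc, p_max

if __name__ == "__main__":
    for n in (100, 500, 1000):
        v, p = teg_output(n)
        print(f"{n:5d} junctions: V_oc = {v:5.2f} V, P_max = {p:5.2f} W")
```

Under these assumptions the estimate scales linearly with junction count, which is the motivation for packing as many horizontal junctions per unit volume as the SLM process allows.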
Procedia PDF Downloads 67180 'Go Baby Go'; Community-Based Integrated Early Childhood and Maternal Child Health Model Improving Early Childhood Stimulation, Care Practices and Developmental Outcomes in Armenia: A Quasi-Experimental Study
Authors: Viktorya Sargsyan, Arax Hovhannesyan, Karine Abelyan
Abstract:
Introduction: During the last decade, scientific studies have proven the importance of Early Childhood Development (ECD) interventions. These interventions have been shown to create strong foundations for children’s intellectual, emotional and physical well-being, and to shape learning and economic outcomes for children as they mature into adulthood. Many children in rural Armenia fail to reach their full development potential due to a lack of early brain stimulation (playing, singing, reading, etc.) from their parents, and a lack of community tools and services to follow up children’s neurocognitive development. This is exacerbated by high rates of stunting and anemia among children under 3 (CU3). This research study tested the effectiveness of an integrated ECD and Maternal, Newborn and Childhood Health (MNCH) model, called “Go Baby, Go!” (GBG), against the traditional MNCH strategy, which focuses solely on preventive health and nutrition interventions. The hypothesis of this quasi-experimental study was: Children exposed to GBG will have better neurocognitive and nutrition outcomes compared to those receiving only the MNCH intervention. The secondary objective was to assess the effect of GBG on parental child care and nutrition practices. Methodology: The 14-month-long study targeted all 1,300 children aged 0 to 23 months living in 43 study communities in the Gavar and Vardenis regions (Gegharkunik province, Armenia). Twenty-three intervention communities, 680 children, received GBG, and 20 control communities, 630 children, received MNCH interventions only. Baseline and evaluation data on child development, nutrition status and parental child care and nutrition practices were collected (caregiver interview, direct child assessment). In the intervention sites, in addition to MNCH (maternity schools, supportive supervision for Health Care Providers (HCP)), the trained GBG facilitators conducted six interactive group sessions for mothers (key messages, information, group discussions, role playing, video-watching, toys/books preparation, according to the GBG curriculum), and two sessions (condensed GBG) for adult family members (husbands, grandmothers). The trained HCPs received quality supervision for ECD counseling and screening. Findings: The GBG model proved to be effective in improving ECD outcomes. Children in the intervention sites had 83% higher odds on the total ECD composite score (cognitive, language, motor) compared to children in the control sites (aOR 1.83; 95 percent CI: 1.08-3.09; p=0.025). Caregivers also demonstrated better child care and nutrition practices (minimum dietary diversity in the intervention sites was 55 percent higher compared to control (aOR=1.55, 95 percent CI 1.10-2.19, p=0.013); support for learning and disciplining practices (aOR=2.22, 95 percent CI 1.19-4.16, p=0.012)). However, there was no evidence of stunting reduction in either study arm. The effect of the integrated model was more prominent in Vardenis, a community which is characterised by high food insecurity and limited knowledge of positive parenting skills. Conclusion: The GBG model is effective and could be applied in target areas with the greatest economic disadvantages and parenting challenges to improve ECD, care practices and developmental outcomes. Longitudinal studies are needed to assess the long-term effects of GBG on learning and school readiness.Keywords: early childhood development, integrated interventions, parental practices, quasi-experimental study
Procedia PDF Downloads 172179 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia
Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger
Abstract:
Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve from an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI) between neurosurgeons and radiologists. To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI compared with a radiologist’s. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients’ hospital records and the neurosurgeon’s correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and neurosurgeon and radiologist interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing. Independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon’s impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon’s impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: Composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon’s interpretation of NVC on MRI, when compared with the radiologist’s, had a greater correlation with pain-free outcomes 1 year post-MVD.Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia
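The validation logic above, relating a dichotomised pre-operative composite score to the 1-year pain-free outcome, can be sketched with a logistic regression in Python. The data below are synthetic placeholders (only the sample size of 95 is borrowed from the abstract), so the printed odds ratio and confidence interval are not the study's results.

```python
# Hedged sketch: odds ratio for "composite score above cut-off" vs. pain-free outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 95                                                    # cohort size from the abstract
score_above_cutoff = rng.integers(0, 2, n)                # e.g. Panczykowski score > 3 points
# Simulate outcomes so a high score roughly doubles the odds of being pain-free
true_logit = -0.2 + np.log(2.0) * score_above_cutoff
pain_free_1yr = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(score_above_cutoff.astype(float))
fit = sm.Logit(pain_free_1yr, X).fit(disp=False)

odds_ratio = float(np.exp(fit.params[1]))
ci_low, ci_high = np.exp(np.asarray(fit.conf_int())[1])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}, p = {fit.pvalues[1]:.3f})")
```

The same pattern, refit with the neurosurgeon's versus the radiologist's NVC grading as the predictor, is what allows the two interpretations to be compared head to head.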
Procedia PDF Downloads 74178 Successful Public-Private Partnership Through the Impact of Environmental Education: A Case Study on Transforming Community Conflict into Harmony in the Dongpian Community
Authors: Men An Pan, Ho Hsiung Huang, Jui Chuan Lin, Tsui Hsun Wu, Hsing Yuan Yen
Abstract:
Pingtung County, located in the southernmost region of Taiwan, has the largest number of pig farms in the country. In the past, livestock operators in Dongpian Village discharged their wastewater into the nearby water bodies, causing water pollution in the local rivers and polluting the air with the stench of pig excrement. This resulted in many complaints from the local residents. In response to the community's long-standing opposition to the livestock farms over this conflict, the County Government's Environmental Protection Bureau (PTEPB) examined potential ways out in addition to imposing heavy fines on the perpetrators. Through helping the livestock farms to upgrade their pollution prevention equipment, promoting the reuse of biogas residue and slurry from the pig excrement, and environmental education, the conflict was successfully resolved. The properly treated wastewater from the livestock farms has been freely provided to the neighboring farmlands via pipelines and tankers. Thus, extensive cultivation of bananas, papaya, red dragon fruit, Inca nut, and cocoa has resulted in 34% resource utilization of biogas residue as a fertilizer. This has encouraged farmers to reduce chemical fertilizers and use microbial materials like photosynthetic bacteria after banning herbicides, while lowering the cost of wastewater treatment in livestock farms and alleviating environmental pollution simultaneously. That is, the livestock farms fully demonstrate the determination to fulfill their corporate social responsibility (CSR). Due to this success, eight farms jointly established a social enterprise - "Dongpian Gemstone Village Co., Ltd." - to promote organic farming through a "shared farm." The company returns 5% of its total revenue to the community through caregiving services for the elderly and a fund for young local farmers. The community adopted the Satoyama Initiative in accordance with the Conference of the Parties to the CBD (COP10). Through the positive impact of environmental education, the community seeks to realize the coexistence of society and nature while maintaining and developing socio-economic activities (including agriculture) with respect for nature and building a harmonious relationship between humans and nature. By way of sustainable management of resources and ensuring biodiversity, the community is transforming into a socio-ecological production landscape. Apart from nature conservation and watercourse ecology, preserving local culture is also a key focus of the environmental education. To mitigate the impact of global warming and climate change, the community and the government have worked together to develop a disaster prevention and relief system, to establish a low-carbon-emitting homeland, and to become a model for resilient communities. By the power of environmental education, this community has turned its residents’ hearts and minds into concrete action, fulfilled social responsibility, and moved towards realizing the UN SDGs. Even though it is not the only community to integrate government agencies, research institutions, and NGOs for environmental education, it is a prime example of a low-carbon sustainable community that achieves more than 9 SDGs, including responsible consumption and production, climate change action, and diverse partnerships. 
The community is also leveraging environmental education to become a net-zero carbon community targeted by COP26.Keywords: environmental education, biogas residue, biogas slurry, CSR, SDGs, climate change, net-zero carbon emissions
Procedia PDF Downloads 143177 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption towards energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit the capabilities of practical existing smart meters, we work on low-sampling data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that uses behaviour simulation of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. Also, it facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling data, in comparison with the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously reported in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
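As a concrete illustration of the DTW-matching step, the sketch below compares a detected one-sample-per-minute power window against a few appliance signature templates and reports the closest match. The templates, window and appliance names are illustrative placeholders, not signatures extracted from LPG or REDD.

```python
# Minimal DTW matching of a low-frequency (1/60 Hz) power window to appliance templates.
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D power traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical appliance signatures in watts, one sample per minute
templates = {
    "fridge": np.array([0, 120, 125, 120, 0], dtype=float),
    "kettle": np.array([0, 2000, 2050, 0], dtype=float),
    "washer": np.array([0, 500, 1800, 1900, 400, 0], dtype=float),
}

window = np.array([0, 115, 130, 118, 0], dtype=float)        # detected event window
scores = {name: dtw_distance(window, tpl) for name, tpl in templates.items()}
print("best match:", min(scores, key=scores.get), scores)
```

In a full pipeline the detected window would come from the event detection stage, and the DTW scores would feed the tuning of the general appliance models rather than a hard nearest-template decision.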
Procedia PDF Downloads 78176 Analysis of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis
Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi
Abstract:
Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a deteriorating motor phenotype during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods to study motor functions in rodents are often limited to evaluating gross motor functions only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture more subtle motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic and applied both in diagnosis and in determining the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three different chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) for amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models for Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data, and body movement trajectories including analysis of various points of interest such as the movement and position of landmarks in the torso, tail and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction with conventional motor assays was evident. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than what is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets of the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing a strong genotype difference. Kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters for each model is created, and these parameters provide a better understanding of the disease progression and enhanced sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were already evident several weeks earlier than with traditional gross motor assays. 
Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as those of the joints and various body parts, longitudinally, providing a sophisticated and translatable method for dissecting motor components in rodent disease models and evaluating therapeutic interventions.Keywords: gait analysis, kinematic, motor impairment, inherent feature
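The PCA step used for the large HD data sets can be sketched as follows with scikit-learn; the number of animals, the ~80 parameters and the size of the genotype shift are synthetic stand-ins, not the study's data.

```python
# Sketch: compress ~80 correlated gait parameters into a few principal components
# and compare genotypes on the first component. All data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_mice, n_params = 40, 80
genotype = np.repeat([0, 1], n_mice // 2)                 # 0 = wild-type, 1 = transgenic
# A subtle genotype shift spread across many correlated parameters
latent = rng.normal(size=(n_mice, 1)) + 0.8 * genotype[:, None]
X = latent @ rng.normal(size=(1, n_params)) + rng.normal(scale=0.5, size=(n_mice, n_params))

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=5).fit(X_std)
scores = pca.transform(X_std)

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 mean, WT vs TG:",
      round(float(scores[genotype == 0, 0].mean()), 2),
      round(float(scores[genotype == 1, 0].mean()), 2))
```

A genotype separation that is visible on the first one or two components, but not on any single raw parameter, is exactly the kind of early, subtle difference the kinematic readouts are meant to expose.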
Procedia PDF Downloads 355175 The Role of a Specialized Diet for Management of Fibromyalgia Symptoms: A Systematic Review
Authors: Siddhant Yadav, Rylea Ranum, Hannah Alberts, Abdul Kalaiger, Brent Bauer, Ryan Hurt, Ann Vincent, Loren Toussaint, Sanjeev Nanda
Abstract:
Background and significance: Fibromyalgia (FM) is a chronic pain disorder also characterized by chronic fatigue, morning stiffness, sleep and cognitive symptoms, and psychological disturbances (anxiety, depression), and it is comorbid with multiple medical and psychiatric conditions. It has an incidence of 2-4% in the general population and is reported more commonly in women. Oxidative stress and inflammation are thought to contribute to pain in patients with FM, and the adoption of an antioxidant/anti-inflammatory diet has been suggested as a modality to alleviate symptoms. The aim of this systematic review was to evaluate the efficacy of specialized diets (ketogenic, gluten-free, Mediterranean, and low-carbohydrate) in improving FM symptoms. Methodology: A comprehensive search of the following databases from inception to July 15th, 2021, was conducted: Ovid MEDLINE and Epub ahead of print, in-process and other non-indexed citations and daily, Ovid Embase, Ovid EBM reviews, Cochrane central register of controlled trials, EBSCO host CINAHL with full text, Elsevier Scopus, website and citation index, web of science emerging sources citation and clinicaltrials.gov. We included randomized controlled trials, non-randomized experimental studies, cross-sectional studies, cohort studies, case series, and case reports in adults with fibromyalgia. The risk of bias was assessed with the specific criteria recommended by the Agency for Healthcare Research and Quality (AHRQ). Results: Thirteen studies were eligible for inclusion. This included a total of 761 participants. Twelve out of the 13 studies reported improvement in widespread body pain, joint stiffness, sleeping pattern, mood, and gastrointestinal symptoms, and one study reported no changes in symptomatology in patients with FM on specialized diets. None of the studies showed worsening of symptoms associated with a specific diet. Most of the patient population was female, with the mean age at which fibromyalgia was diagnosed being 48.12 years. Improvement in symptoms was reported by patients adhering to a gluten-free diet, a raw vegan diet, a tryptophan- and magnesium-enriched Mediterranean diet, an aspartame- and MSG-elimination diet, and specifically a Khorasan wheat diet. Risk of bias assessment noted that 6 studies had a low risk of bias (5 clinical trials and 1 case series), four studies had a moderate risk of bias, and 3 had a high risk of bias. In many of the studies, the allocation of treatment (diets) was not adequately concealed, and the researchers did not rule out any potential impact from a concurrent intervention or an unintended exposure that might have biased the results. On the other hand, there was a low risk of attrition bias in all the trials; all were conducted with an intention-to-treat approach, and the inclusion/exclusion criteria, exposures/interventions, and primary outcomes were valid, reliable, and implemented consistently across all study participants. Concluding statement: Patients with fibromyalgia who followed specialized diets experienced a variable degree of improvement in their widespread body pain. Improvement was also seen in stiffness, fatigue, moods, sleeping patterns, and gastrointestinal symptoms. Additionally, the majority of the patients also reported improvement in overall quality of life.Keywords: fibromyalgia, specialized diet, vegan, gluten free, Mediterranean, systematic review
Procedia PDF Downloads 73174 Development of Portable Hybrid Renewable Energy System for Sustainable Electricity Supply to Rural Communities in Nigeria
Authors: Abdulkarim Nasir, Alhassan T. Yahaya, Hauwa T. Abdulkarim, Abdussalam El-Suleiman, Yakubu K. Abubakar
Abstract:
The need for sustainable and reliable electricity supply in rural communities of Nigeria remains a pressing issue, given the country's vast energy deficit and the significant number of inhabitants lacking access to electricity. This research focuses on the development of a portable hybrid renewable energy system designed to provide a sustainable and efficient electricity supply to these underserved regions. The proposed system integrates multiple renewable energy sources, specifically solar and wind, to harness the abundant natural resources available in Nigeria. The design and development process involves the selection and optimization of components such as photovoltaic panels, wind turbines, energy storage units (batteries), and power management systems. These components are chosen based on their suitability for rural environments, cost-effectiveness, and ease of maintenance. The hybrid system is designed to be portable, allowing for easy transportation and deployment in remote locations with limited infrastructure. Key to the system's effectiveness is its hybrid nature, which ensures continuous power supply by compensating for the intermittent nature of individual renewable sources. Solar energy is harnessed during the day, while wind energy is captured whenever wind conditions are favourable, thus ensuring a more stable and reliable energy output. Energy storage units are critical in this setup, storing excess energy generated during peak production times and supplying power during periods of low renewable generation. Feasibility studies include assessing the solar irradiance, wind speed patterns, and energy consumption needs of rural communities. The simulation results inform the optimization of the system's design to maximize energy efficiency and reliability. This paper presents the development and evaluation of a 4 kW standalone hybrid system combining wind and solar power. The portable device measures approximately 8 feet 5 inches in width, 8 inches 4 inches in depth, and around 38 feet in height. It includes four solar panels with a capacity of 120 watts each, a 1.5 kW wind turbine, a solar charge controller, remote power storage, batteries, and battery control mechanisms. Designed to operate independently of the grid, this hybrid device offers versatility for use on highways and in various other applications. It also presents a summary and characterization of the device, along with photovoltaic data collected in Nigeria during the month of April. The construction plan for the hybrid energy tower is outlined, which involves combining a vertical-axis wind turbine with solar panels to harness both wind and solar energy. Positioned between the roadway divider and automobiles, the tower takes advantage of the air velocity generated by passing vehicles. The solar panels are strategically mounted to deflect air toward the turbine while generating energy. Generators and gear systems attached to the turbine shaft enable power generation, offering a portable solution to energy challenges in Nigerian communities. The study also addresses the economic feasibility of the system, considering the initial investment costs, maintenance, and potential savings from reduced fossil fuel use. A comparative analysis with traditional energy supply methods highlights the long-term benefits and sustainability of the hybrid system.Keywords: renewable energy, solar panel, wind turbine, hybrid system, generator
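A rough daily energy balance helps make the sizing concrete. The sketch below takes the 4 x 120 W panel array and the 1.5 kW turbine from the description; the sun hours, wind capacity factor, system losses, battery capacity and daily load are assumed illustrative values, not measurements from the prototype.

```python
# Back-of-envelope daily energy balance for the described PV + wind hybrid.
PV_WATTS        = 4 * 120      # W, four 120 W panels (from the description)
WIND_RATED_W    = 1500         # W, rated turbine output (from the description)
SUN_HOURS       = 5.5          # equivalent peak sun hours per day (assumed)
WIND_CAP_FACTOR = 0.20         # average fraction of rated wind output (assumed)
SYSTEM_EFF      = 0.80         # wiring, controller and conversion losses (assumed)
BATTERY_WH      = 2 * 12 * 100 # e.g. two 12 V / 100 Ah batteries (assumed)
DAILY_LOAD_WH   = 3500         # assumed daily load to be served

pv_wh   = PV_WATTS * SUN_HOURS * SYSTEM_EFF
wind_wh = WIND_RATED_W * 24 * WIND_CAP_FACTOR * SYSTEM_EFF
supply  = pv_wh + wind_wh

print(f"PV: {pv_wh:.0f} Wh/day, wind: {wind_wh:.0f} Wh/day, total: {supply:.0f} Wh/day")
print(f"surplus vs. load: {supply - DAILY_LOAD_WH:.0f} Wh/day; "
      f"battery autonomy: {BATTERY_WH / DAILY_LOAD_WH:.1f} days")
```

Swapping in measured irradiance and wind-speed data for a target community turns the same balance into the optimization input described above.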
Procedia PDF Downloads 41173 Pharmacophore-Based Modeling of a Series of Human Glutaminyl Cyclase Inhibitors to Identify Lead Molecules by Virtual Screening, Molecular Docking and Molecular Dynamics Simulation Study
Authors: Ankur Chaudhuri, Sibani Sen Chakraborty
Abstract:
In humans, glutaminyl cyclase activity is highly abundant in neuronal and secretory tissues and is preferentially restricted to the hypothalamus and pituitary. The N-terminal modification of β-amyloid (Aβ) peptides by the generation of pyro-glutamyl (pGlu)-modified Aβs (pE-Aβs) is an important process in the initiation of the formation of neurotoxic plaques in Alzheimer’s disease (AD). This process is catalyzed by glutaminyl cyclase (QC). The expression of QC is characteristically up-regulated in the early stage of AD, and the hallmark of the inhibition of QC is the prevention of the formation of pE-Aβs and plaques. A computer-aided drug design (CADD) process was employed to guide the design of potentially active compounds and to understand their inhibitory potency against human glutaminyl cyclase (QC). This work elaborates the ligand-based and structure-based pharmacophore exploration of QC by using the known inhibitors. Three-dimensional (3D) quantitative structure-activity relationship (QSAR) methods were applied to 154 compounds with known IC50 values. All the inhibitors were divided into two sets, a training set and a test set. The training set was used to build the quantitative pharmacophore model based on the principle of structural diversity, whereas the test set was employed to evaluate the predictive ability of the pharmacophore hypotheses. A chemical feature-based pharmacophore model was generated from the 92 known training-set compounds by the HypoGen module implemented in the Discovery Studio 2017 R2 software package. The best hypothesis (Hypo1) was selected based upon the highest correlation coefficient (0.8906), lowest total cost (463.72), and lowest root mean square deviation (2.24 Å) values. The highest correlation coefficient value indicates greater predictive activity of the hypothesis, whereas the lower root mean square deviation signifies a small deviation of experimental activity from the predicted one. The best pharmacophore model (Hypo1) of the candidate inhibitors comprised four features: two hydrogen bond acceptors, one hydrogen bond donor, and one hydrophobic feature. Hypo1 was validated by several parameters such as test-set activity prediction, cost analysis, Fischer's randomization test, the leave-one-out method, and a heat map of the ligand profiler. The predicted features were then used for virtual screening of potential compounds from the NCI, ASINEX, Maybridge and Chembridge databases. More than seven million compounds were used for this purpose. The hit compounds were filtered by drug-likeness and pharmacokinetic properties. The selected hits were docked to the high-resolution three-dimensional structure of the target protein glutaminyl cyclase (PDB ID: 2AFU/2AFW) to filter these hits further. To validate the molecular docking results, the most active compound from the dataset was selected as a reference molecule. From the density functional theory (DFT) study, ten molecules were selected based on their high HOMO (highest occupied molecular orbital) energies and low bandgap values. Molecular dynamics simulations with explicit solvation systems of the final ten hit compounds revealed that a large number of non-covalent interactions were formed with the binding site of the human glutaminyl cyclase. 
It was suggested that the hit compounds reported in this study could help in the future design of potent lead inhibitors against human glutaminyl cyclase.Keywords: glutaminyl cyclase, hit lead, pharmacophore model, simulation
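Two of the steps described above, test-set validation of the pharmacophore hypothesis and frontier-orbital ranking of hits, can be illustrated with a short Python sketch. The activity values and HOMO-LUMO gaps below are synthetic placeholders; only the 62-compound test-set size (154 minus the 92 training compounds) and the "keep ten hits" step mirror the abstract.

```python
# Hedged sketch: correlation/RMSD check of predicted vs. experimental activities,
# followed by a simple lowest-bandgap ranking of docking hits. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
experimental_pIC50 = rng.uniform(4.0, 9.0, size=62)             # 62 test-set compounds
predicted_pIC50 = experimental_pIC50 + rng.normal(0, 0.5, 62)   # hypothetical Hypo1 output

corr = np.corrcoef(experimental_pIC50, predicted_pIC50)[0, 1]
rmsd = np.sqrt(np.mean((experimental_pIC50 - predicted_pIC50) ** 2))
print(f"test-set correlation r = {corr:.3f}, RMSD = {rmsd:.2f} log units")

# Rank docking hits by HOMO-LUMO gap (eV, synthetic) and keep the ten smallest gaps
gaps = {f"hit_{i:02d}": float(g) for i, g in enumerate(rng.uniform(2.0, 6.0, 25))}
top10 = sorted(gaps, key=gaps.get)[:10]
print("ten hits with the lowest gap:", top10)
```

In the actual workflow these numbers would come from the HypoGen test-set prediction and the DFT calculations rather than a random generator.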
Procedia PDF Downloads 131172 An Innovation Decision Process View in an Adoption of Total Laboratory Automation
Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu
Abstract:
With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires substantial funding. This study explores an early adoption experience by Taiwan’s large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers’ Diffusion Decision Process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space is dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to proceed with the redesign of the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital’s new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues agreed on the change. Thus, the top managers arranged a series of benchmark visits to raise colleagues’ awareness and acceptance of TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits and so on. 3. Decision stage: This TLA suggestion was well supported by the top management of CMG and, finally, they made a decision to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposal and several vendor briefings, CMG confirmed their laboratory automation architecture and finally completed the contracts. At the same time, a cross-department project team was formed and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of the operations and to respond immediately with adjustments. The main project tasks included: 1) completion of the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with hospital information systems (HIS)/laboratory information systems (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; 3) routine pre-admission blood drawing and formal inpatient morning blood drawing can be incorporated into an automatically-prepared tube mechanism. The study summarizes the continuous improvement orientations below: (1) Flexible reference range set-up for new instruments in LIS. (2) Restructuring of the specimen categories. (3) Continuous review and improvement of the examination process. (4) Whether to install tube (specimen) delivery tracks needs further evaluation.Keywords: innovation decision process, total laboratory automation, health care
Procedia PDF Downloads 419171 Significant Aspects and Drivers of Germany and Australia's Energy Policy from a Political Economy Perspective
Authors: Sarah Niklas, Lynne Chester, Mark Diesendorf
Abstract:
Geopolitical tensions, climate change and recent movements favouring a transformative shift in institutional power structures have influenced the economics of conventional energy supply for decades. This study takes a multi-dimensional approach to illustrate the potential of renewable energy (RE) technology to provide a pathway to a low-carbon economy driven by ecologically sustainable, independent and socially just energy. This comparative analysis identifies economic, political and social drivers that shaped the adoption of RE policy in two significantly different economies, Germany and Australia, with strong and weak commitments to RE, respectively. Two complementary political-economy theories frame the document-based analysis. Régulation Theory, inspired by Marxist ideas and strongly influenced by contemporary economic problems, provides the background to explore the social relationships contributing to the adoption of RE within the macro-economy. Varieties of Capitalism theory, a more recently developed micro-economic approach, examines the nature of state-firm relationships. Together these approaches provide a comprehensive lens of analysis. Germany’s energy policy transformed substantially over the second half of the last century. The development is characterised by the coordination of societal, environmental and industrial demands throughout the advancement of capitalist regimes. In the Fordist regime, mass production based on coal drove Germany’s astounding economic recovery during the post-war period. Economic depression and the instability of institutional arrangements prompted the urgent pursuit of national security and energy independence. During the post-war Flexi-Fordist period, quality-based production, innovation and technology-based competition schemes, particularly with regard to political power structures in and across Europe, favoured the adoption of RE. Innovation, knowledge and education were institutionalized, leading to the legislation of environmental concerns. Lastly, the establishment of government-industry-based coordinative programs supported the phase-out of nuclear power and the increased adoption of RE during the last decade. Australia’s energy policy is shaped by the country’s richness in mineral resources. Energy policy has largely served coal mining, historically and currently one of the most capital-intensive industries. Assisted by the macro-economic dimensions of institutional arrangements, social and financial capital is orientated towards the export-led and strongly demand-oriented economy. Here, energy policy serves the maintenance of capital accumulation in the mining sector and the emerging Asian economies. The adoption of supportive renewable energy policy would challenge the distinct role of the mining industry within the (neo)-liberal market economy. The state’s protection of the mining sector has resulted in a weak commitment to RE policy and investment uncertainty in the energy sector. Recent developments, driven by strong public support for RE, emphasize the sense of community in urban and rural areas and the emergence of a bottom-up approach to adopting renewables. Thus, political economy frameworks on both the macro-economic (Regulation Theory) and micro-economic (Varieties of Capitalism theory) scales can together explain the strong commitment to RE in Germany vis-à-vis the weak commitment in Australia.Keywords: political economy, regulation theory, renewable energy, social relationships, energy transitions
Procedia PDF Downloads 381170 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be considered for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be amplified. A hetero-personalized identity can be imposed on the affected individual(s). Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts ‘on the loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumer CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a ‘Bottom Stop’ for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
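To make the Decision Support System and XAI layers more tangible, here is a hedged Python sketch: a small neural scorer validated with k-fold cross-validation and explained with SHAP-style per-feature contributions. The feature names, data and model size are placeholders rather than the framework's actual predictor sets, and the snippet assumes the scikit-learn and shap packages are installed.

```python
# Sketch of the Decision Support System layer plus a SHAP-style explanation agent.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier
import shap  # assumed available; any model-agnostic explainer could stand in

rng = np.random.default_rng(3)
features = ["income", "debt_ratio", "payment_history", "share_of_wallet"]  # hypothetical
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic default flag

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
cv_acc = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("5-fold CV accuracy:", np.round(cv_acc, 3))            # Decision Support System layer

model.fit(X, y)
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], X[:50])
contributions = explainer.shap_values(X[:1])                 # one applicant's explanation
for name, value in zip(features, np.ravel(contributions)):
    print(f"{name:>16s}: {value:+.3f}")
```

In the proposed framework these per-feature contributions would be surfaced by the XAI agents, with the Oversight Layer's 'Bottom Stop' letting an analyst intervene when a contribution pattern looks discriminatory.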
Procedia PDF Downloads 34169 Sinhala Sign Language to Grammatically Correct Sentences using NLP
Authors: Anjalika Fernando, Banuka Athuraliya
Abstract:
This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. Trained on a dataset of SSL gestures, the LSTM model learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. Trained on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, the NMT model learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. 
By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT
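A minimal sketch of the first stage, the LSTM sign classifier, is shown below in Keras. The sequence length, key-point feature size, number of sign classes and the synthetic training data are assumptions for illustration; the real system described above is trained on recorded SSL gestures and reports ~94% accuracy on its own dataset.

```python
# Hedged sketch of an LSTM classifier over sign key-point sequences (stage one).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_frames, n_features, n_signs = 30, 126, 50        # assumed dimensions, not the paper's

model = keras.Sequential([
    layers.Input(shape=(n_frames, n_features)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_signs, activation="softmax"),    # one output per SSL sign
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; a real pipeline would feed extracted hand/pose key points
X = np.random.rand(256, n_frames, n_features).astype("float32")
y = np.random.randint(0, n_signs, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("predicted sign index:", int(model.predict(X[:1], verbose=0).argmax()))
```

The second stage would wrap a similar encoder-decoder pair around token sequences, mapping ungrammatical Sinhala output from stage one to corrected sentences.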
Procedia PDF Downloads 104168 Triple Immunotherapy to Overcome Immune Evasion by Tumors in a Melanoma Mouse Model
Authors: Mary-Ann N. Jallad, Dalal F. Jaber, Alexander M. Abdelnoor
Abstract:
Introduction: Current evidence confirms that both the innate and adaptive immune systems are capable of recognizing and abolishing malignant cells. The emergence of cancerous tumors in patients is, therefore, an indication that certain cancer cells can resist elimination by the immune system through a process known as “immune evasion”. In fact, cancer cells often exploit regulatory mechanisms to escape immunity. Such mechanisms normally exist to control the immune responses and prohibit exaggerated or autoimmune reactions. Recently, immunotherapies have shown promising yet limited results. Therefore, this study investigates several immunotherapeutic combinations and devises a triple immunotherapy which harnesses the innate and acquired immune responses towards the annihilation of malignant cells by overcoming their capacity for immune evasion, consequently hampering malignant progression and eliminating established tumors. The aims of the study are to rule out acute/chronic toxic effects of the proposed treatment combinations, to assess the effect of these combinations on tumor growth and survival rates, and to investigate potential mechanisms underlying the phenotypic results through analyzing serum levels of anti-tumor cytokines, angiogenic factors and a tumor progression indicator, and the tumor-infiltrating immune cell populations. Methodology: For toxicity analysis, cancer-free C57BL/6 mice are randomized into 9 groups: group 1 untreated, group 2 treated with sterile saline (the solvent of the treatments used), group 3 treated with Monophosphoryl-lipid-A, group 4 with anti-CTLA4-antibodies, group 5 with 1-Methyl-Tryptophan (an Indoleamine-Dioxygenase-1 inhibitor), group 6 with both MPLA and anti-CTLA4-antibodies, group 7 with both MPLA and 1-MT, group 8 with both anti-CTLA4-antibodies and 1-MT, and group 9 with all three: MPLA, anti-CTLA4-antibodies and 1-MT. Mice are monitored throughout the treatment period and for the following three months. At that point, histological sections from their main organs are assessed. For tumor progression and survival analysis, a murine melanoma model is generated by injecting analogous mice with B16F10 melanoma cells. These mice are segregated into the listed nine groups. Their tumor size and survival are monitored. To depict the underlying mechanisms, melanoma-bearing mice from each group are sacrificed at several time-points. Sera are tested to assess the levels of Interleukin-12 (IL-12), Vascular Endothelial Growth Factor (VEGF), and S100B. Furthermore, tumors are excised for analysis of the infiltrated immune cell populations, including T-cells, macrophages, natural killer cells and immune-regulatory cells. Results: Toxicity analysis shows that all treated groups present no signs of either acute or chronic toxicity. Their appearance and weights were comparable to those of the control groups throughout the treatment period and for the following 3 months. Moreover, histological sections from their hearts, kidneys, lungs, and livers were normal. Work is ongoing for completion of the remaining study aims. Conclusion: Toxicity was the major concern for the success of the proposed comprehensive combination therapy. Data generated so far ruled out any acute or chronic toxic effects. Consequently, ongoing work is quite promising and may significantly contribute to the development of more effective immunotherapeutic strategies for the treatment of cancer patients.Keywords: cancer immunotherapy, check-point blockade, combination therapy, melanoma
Procedia PDF Downloads 122