Search results for: National Australian Built Environment Rating System
1582 Assessment of the Masticatory Muscle Function in Young Adults Following SARS-CoV-2 Infection
Authors: Mimoza Canga, Edit Xhajanka, Irene Malagnino
Abstract:
The COVID-19 pandemic has had a significant influence on the lives of millions of people and is a threat to public health. SARS-CoV-2 infection has been associated with a number of health problems, including damage to the lungs and central nervous system. Additionally, it can also cause oral health problems, such as pain and weakening of the chewing muscles. The purpose of the study is the assessment of the masticatory muscle function in young adults between 18 and 29 years old following SARS-CoV-2 infection. Materials and methods: This study is quantitative cross-sectional research conducted in Albania between March 2023 and September 2023. A total of 104 students participated in the research, of which 64 were female (61.5%) and 40 were male (38.5%). They were divided into four age groups: 18-20, 21-23, 24-26, and 27-29 years old. The students willingly consented to take part in the study and were guaranteed that their participation would remain anonymous. The study recorded no dropouts, and it was carried out in compliance with the Declaration of Helsinki. Statistical analysis was conducted using IBM SPSS Statistics Version 23.0 for Microsoft Windows (Chicago, IL, USA). Data were evaluated utilizing analysis of variance (ANOVA), with a significance level set at P ≤ 0.05. Results: 80 (76.9%) of the participants who had recovered from COVID-19 reported chronic masticatory muscle pain (P < 0.0001) and masticatory muscle spasms (P = 0.002). According to data analysis, 70 (67.3%) of the participants had a sore throat (P = 0.007). 74% of the students reported experiencing weakness in their chewing muscles (P = 0.003). The participants reported having undergone the following treatments: azithromycin (500 mg daily), prednisolone sodium phosphate (15 mg/5 mL daily), Augmentin tablets (625 mg), vitamin C (1000 mg), magnesium sulfate (4 g/100 mL), oral vitamin D3 supplementation of 5000 IU daily, ibuprofen (400 mg every 6 hours), and tizanidine (2 mg every 6 hours). Conclusion: This study, conducted in Albania, has limitations, but it can be concluded that COVID-19 directly affects the functioning of the masticatory muscles.
Keywords: Albania, chronic pain, COVID-19, cross-sectional study, masticatory muscles, spasm
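As an illustration of the kind of one-way ANOVA comparison described above (not the authors' actual SPSS analysis), a minimal Python sketch with hypothetical symptom scores for the four age groups might look like this:

```python
from scipy import stats

# Hypothetical masticatory-pain scores per age group (illustrative values only)
group_18_20 = [3.1, 2.8, 3.5, 4.0, 3.3]
group_21_23 = [3.4, 3.9, 4.2, 3.0, 3.6]
group_24_26 = [4.1, 4.4, 3.8, 4.6, 4.0]
group_27_29 = [4.3, 4.8, 4.1, 4.5, 4.9]

# One-way ANOVA across the four age groups, significance threshold P <= 0.05
f_stat, p_value = stats.f_oneway(group_18_20, group_21_23, group_24_26, group_27_29)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
print("Significant at P <= 0.05" if p_value <= 0.05 else "Not significant")
```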
Procedia PDF Downloads 27
1581 Development of Composition and Technology of Vincristine Nanoparticles Using High-Molecular Carbohydrates of Plant Origin
Authors: L. Ebralidze, A. Tsertsvadze, D. Berashvili, A. Bakuridze
Abstract:
Current cancer therapy strategies are based on surgery, radiotherapy and chemotherapy. The problems associated with chemotherapy are one of the biggest challenges for clinical medicine. These include: low specificity, broad spectrum of side effects, toxicity and development of cellular resistance. Therefore, improved anti-cancer drugs need to be developed urgently. In particular, in order to increase the efficiency of anti-cancer drugs and reduce their side effects, scientists work on the formulation of nano-drugs. The objective of this study was to develop the composition and technology of vincristine nanoparticles using high-molecular carbohydrates of plant origin. Plant polysaccharides, particularly soybean seed polysaccharides, flaxseed polysaccharides, citrus pectin, gum arabic and sodium alginate, were used as objects. Based on biopharmaceutical research, vincristine-containing nanoparticle formulations were prepared. High-energy emulsification and solvent evaporation methods were used for the preparation of the nanosystems. Polysorbate 80, polysorbate 60, sodium dodecyl sulfate, glycerol and polyvinyl alcohol were used in the formulation as emulsifying agents and stabilizers of the system. The ratio of API to polysaccharides, as well as the type of stabilizing and emulsifying agents, strongly affects the particle size of the final product. The influence of the preparation technology and the type and concentration of stabilizing agents on the properties of the nanoparticles was evaluated. In the next stage of research, the nanosystems were characterized. Physicochemical characterization of the nanoparticles, i.e. their size, shape and distribution, was performed using an atomic force microscope and a scanning electron microscope. The present study explored the possibility of producing NPs using plant polysaccharides. The optimal ratio of active pharmaceutical ingredient to plant polysaccharides and the best stabilizer and emulsifying agent were determined. The average nanoparticle size range and shape were visualized by SEM.
Keywords: nanoparticles, target delivery, natural high molecule carbohydrates, surfactants
Procedia PDF Downloads 270
1580 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: The paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in series and shunt active filters of UPQC. Data is collected under various control algorithms to analyze UPQC performance. Findings: Results indicate the effectiveness of SRFT and IRPT based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, harmonics, and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data is collected under various control algorithms using simulation in MATLAB Simulink and real-time operation executed with experimental results obtained using RT-LAB. Analysis Procedures: Performance analysis of UPQC under different control algorithms is conducted to evaluate the effectiveness of SRFT and IRPT based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT and IRPT based control techniques compare in improving power quality in distribution systems? What is the impact of using different control configurations on the performance of UPQC? Conclusion: The study demonstrates the efficacy of SRFT and IRPT based control of UPQC in mitigating power quality issues in distribution systems, highlighting their potential for enhancing voltage and current quality.Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
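As background for the synchronous reference frame step mentioned above, the three-phase load quantities are typically projected into a rotating d-q frame with the Park transformation, the d-axis component is low-pass filtered to keep the fundamental, and the remainder is used to build the compensation reference. The short Python sketch below illustrates only that transformation (the signal values and PLL angle are hypothetical; this is not the paper's controller):

```python
import numpy as np

def abc_to_dq0(i_a, i_b, i_c, theta):
    """Park transformation of three-phase currents into the synchronous d-q-0 frame."""
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta),   np.cos(theta - 2*np.pi/3),  np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
        [0.5,             0.5,                        0.5],
    ])
    return T @ np.array([i_a, i_b, i_c])

# Hypothetical sample: balanced 50 Hz currents observed at the PLL angle theta
theta = 2 * np.pi * 50 * 0.004          # angle at t = 4 ms
i_abc = [10*np.cos(theta), 10*np.cos(theta - 2*np.pi/3), 10*np.cos(theta + 2*np.pi/3)]
i_d, i_q, i_0 = abc_to_dq0(*i_abc, theta)
print(round(i_d, 3), round(i_q, 3), round(i_0, 3))   # the fundamental maps to a DC d-component
```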
Procedia PDF Downloads 9
1579 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET
Authors: Tyler T. Procko, Steve Collins
Abstract:
New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access now comparatively involve unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core combined with a framework of modern SQL Server coding conventions have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: Simplicity, Speed and Security. Simplicity is engendered by cutting out the “middleman” steps, effectively making API data access a white box, whereas traditional methods are a black box. Speed is improved because of the fewer translational steps taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.
Keywords: API data access, database, JSON, .NET core, SQL server
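CODA itself is a .NET library, so the sketch below is only a conceptual illustration of the underlying idea: SQL Server 2016+ can emit JSON directly with FOR JSON, so the client receives a ready-to-return payload and no object-mapping layer is needed. The connection string and table are hypothetical, and the Python/pyodbc client is simply a stand-in for any caller:

```python
import pyodbc  # assumes an ODBC driver for SQL Server is installed

# Hypothetical connection details and table -- not part of the CODA library itself
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=ShopDb;Trusted_Connection=yes;"
)

# SQL Server 2016+ serializes the result set to JSON itself (FOR JSON PATH),
# so no ORM mapping code is needed on the client side.
sql = "SELECT Id, Name, Price FROM dbo.Products FOR JSON PATH;"

cursor = conn.cursor()
# FOR JSON streams the document in chunks of rows of a single column; concatenate them.
json_payload = "".join(row[0] for row in cursor.execute(sql))
print(json_payload)  # e.g. [{"Id":1,"Name":"Widget","Price":9.99}, ...]
```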
Procedia PDF Downloads 66
1578 Identifying the Factors that Influence Water-Use Efficiency in Agriculture: Case Study in a Spanish Semi-Arid Region
Authors: Laura Piedra-Muñoz, Ángeles Godoy-Durán, Emilio Galdeano-Gómez, Juan C. Pérez-Mesa
Abstract:
The current agricultural system in some arid and semi-arid areas is not sustainable in the long term. In southeast Spain, groundwater is the main water source and is overexploited, while alternatives like desalination are still limited. The Water Plan for the Mediterranean Basins 2015-2020 indicates a global deficit of 73.42 hm³ and an overexploitation of the aquifers of 205.58 hm³. In order to solve this serious problem, two major actions can be taken: increasing available water, and/or improving the efficiency of its use. This study focuses on the latter. The main aim of this study is to present the major factors related to water usage efficiency in farming. It focuses on Almería province, southeast Spain, one of the most arid areas of the country, and in particular on family farms as the main direct managers of water use in this zone. Many of these farms are among the most water efficient in Spanish agriculture, but this efficiency is not generalized throughout the sector. This work conducts a comprehensive assessment of water performance in this area, using on-farm water-use, structural, socio-economic and environmental information. Two statistical techniques are used: descriptive analysis and cluster analysis. Thus, two groups are identified: the least and the most efficient farms regarding water usage. By analyzing both the common characteristics within each group and the differences between the groups with a one-way ANOVA analysis, several conclusions can be reached. The main differences between the two clusters center on the extent to which innovation and new technologies are used in irrigation. The most water efficient farms are characterized by more educated farmers, a greater degree of innovation, new irrigation technology, specialized production and awareness of water issues and environmental sustainability. The research shows that better practices and policies can have a substantial impact on achieving a more sustainable and efficient use of water. The findings of this study can be extended to farms in similar arid and semi-arid areas and contribute to foster appropriate policies to improve the efficiency of water usage in the agricultural sector.
Keywords: cluster analysis, family farms, Spain, water-use efficiency
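The two-step analysis described above (grouping farms by clustering and then comparing the groups with one-way ANOVA) can be sketched in Python as below; the indicator names and values are invented placeholders, not the survey data:

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Placeholder farm indicators: [water use per kg of produce, innovation score, farmer education years]
farms = np.array([
    [28.0, 2.0, 8],  [30.5, 1.5, 6],  [27.0, 2.5, 9],  [31.0, 1.0, 7],
    [18.5, 4.5, 14], [17.0, 5.0, 16], [19.5, 4.0, 13], [16.0, 4.8, 15],
])

# Split the farms into two clusters (least vs. most water-efficient)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(farms)

# One-way ANOVA on water use between the two clusters
water_use = farms[:, 0]
f_stat, p = stats.f_oneway(water_use[labels == 0], water_use[labels == 1])
print(f"F = {f_stat:.2f}, P = {p:.4f}")
```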
Procedia PDF Downloads 288
1577 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires the knowledge of solar radiation data obtained over a long period. Satellite data has been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of the atmosphere parameters generated by assimilating observation data from various sources, including ground observation, satellites, ships, and aircraft observation with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The evaluation of the performance of the reanalysis dataset (ERA-5) for North Africa against high-quality surface measured data was performed using statistical analysis. Global solar radiation (GSR) distribution was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors in different seasons. The performance of the dataset changes with the temporal resolution of the data used for comparison. The monthly mean values of the data show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R2) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to help in solar energy applications in all fields, and this can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model to give a good result.
Keywords: solar energy, solar radiation, ERA-5, potential energy
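For reference, the three validation statistics quoted above can be computed as in the short Python sketch below; the array names are hypothetical and the ERA-5 and station values are placeholders, not the study's data:

```python
import numpy as np

def rmse(est, obs): return float(np.sqrt(np.mean((est - obs) ** 2)))
def mbe(est, obs):  return float(np.mean(est - obs))          # mean bias error
def mae(est, obs):  return float(np.mean(np.abs(est - obs)))  # mean absolute error

# Placeholder daily GSR values (kWh/m^2): reanalysis estimate vs. ground observation
era5_gsr    = np.array([5.8, 6.1, 6.4, 5.9, 6.7])
station_gsr = np.array([5.6, 6.3, 6.2, 6.0, 6.5])

print(rmse(era5_gsr, station_gsr), mbe(era5_gsr, station_gsr), mae(era5_gsr, station_gsr))
print("R^2:", np.corrcoef(era5_gsr, station_gsr)[0, 1] ** 2)
```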
Procedia PDF Downloads 211
1576 Juxtaposing Constitutionalism and Democratic Process in Nigeria Vis a Vis the South African Perspective
Authors: Onyinyechi Lilian Uche
Abstract:
Limiting arbitrariness and political power in governance is expressed in the concept of constitutionalism. Constitutionalism acknowledges the necessity for government but insists upon a limitation being placed upon its powers. It is therefore clear that the essence of constitutionalism is the obviation of arbitrariness in governance and the maximisation of liberty with adequate and expedient restraint on government. The doctrine of separation of powers accompanied by a system of checks and balances in Nigeria, as in many other African countries, is marked by elements of ‘personal government’, and this has raised questions about whether the apparent separation of powers provided for in the Nigerian Constitution is not just a euphemism for the hegemony of the executive over the other two arms of government: the legislature and the judiciary. Another question raised in the article is whether the doctrine is merely an abstract philosophical inheritance that lacks both content and relevance to the realities of the country and region today. The current happenings in Nigeria and most African countries, such as the flagrant disregard of court orders by the Executive, indicate clearly that the concept of constitutionalism ordinarily goes beyond mere form and strikes at the substance of a constitution. It, therefore, involves a consideration of whether there are provisions in the constitution which limit arbitrariness in the exercise of political powers by providing checks and balances upon such exercise. These questions underscore the need for Africa to craft its own understanding of the separation of powers between the arms of government in furtherance of good governance, as it has been seen that it is possible to have a constitution in place which may just be a mere statement of unenforceable ‘rights’ or may be bereft of provisions guaranteeing liberty or adequate and necessary restraint on the exercise of government. This paper seeks to expatiate on the importance of the nexus between constitutionalism and the democratic process through a juxtaposition of practices between Nigeria and South Africa. The article notes that an abstract analysis of constitutionalism without recourse to the democratic process is meaningless and also analyses the structure of government of some selected African countries. These structures are examined to determine the extent to which the doctrine operates within the arms of government, and the paper concludes that the doctrine should not just be regarded as a general constitutional principle but should be made effective and binding through law and institutional reforms.
Keywords: checks and balances, constitutionalism, democratic process, separation of power
Procedia PDF Downloads 125
1575 Tillage and Intercropping Effects on Growth and Yield of Groundnut in Maize/Groundnut Cropping System
Authors: Oyewole Charles Iledun, Shuaib Harira, Ezeogueri-Oyewole Anne Nnenna
Abstract:
Due to high population pressure and human activities competing for agricultural land, the need to maximize the productivity of available land has become necessary; this has not been achievable in the tropics with monoculture systems where a single harvest per season is the practice. Thus, this study evaluates intercropping combination and tillage practice on the yield and yield components of groundnut in a mixture with maize. The trial was conducted in the rainy seasons of 2020 and 2021 at the Kogi State University Students’ Research and Demonstration Farm, Latitude 7° 30′ N and Longitude 7° 09′ E, in the Southern Guinea Savannah agro-ecological zone of Nigeria. Treatment consisted of three tillage practices [as main plot factor] and five intercropping combinations [subplot factor] assigned to a 3 x 5 factorial experiment replicated four times. Data were collected for growth, development, yield components, and yield of groundnut. Data collected were subjected to statistical analysis in line with the factorial design. Means found to be statistically significant at 5% probability were separated using the LSD method. Regarding yield components and yield-related parameters in groundnut, better performance was observed in sole-cropped groundnut plots compared to the intercropped plots. However, intercropping groundnut with maize was generally advantageous, with LER greater than unity. Among the intercrops, the highest LERs were observed when one row of maize was cropped with one row of groundnut, with the least LER recorded in intercropping two rows of maize with one row of groundnut. For the tillage operations, zero tillage gave the highest LERs in both seasons, while the least LERs were recorded when the groundnut was planted on ridges. Since the highest LERs were observed when one row of maize was intercropped with one row of groundnut, this level of crop combination is recommended for the study area, while ridging may not be necessary to get good groundnut yield, particularly under soil conditions similar to those of the experimental area and with rainfall similar to that observed during the experimental period.
Keywords: canopy height, leaf number, haulm yield / ha, pod yield / ha, harvest index and shelling percentage
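For context, the land equivalent ratio (LER) referred to above is conventionally the sum of the relative yields of the component crops; a value greater than unity indicates an intercropping advantage. In the notation below (not taken from the paper), Y denotes yield:

```latex
\mathrm{LER} = \frac{Y_{\text{maize, intercrop}}}{Y_{\text{maize, sole}}}
             + \frac{Y_{\text{groundnut, intercrop}}}{Y_{\text{groundnut, sole}}},
\qquad \mathrm{LER} > 1 \;\Rightarrow\; \text{intercropping advantage}
```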
Procedia PDF Downloads 22
1574 The Key Role of a Bystander Improving the Effectiveness of Cardiopulmonary Resuscitation Performed in Extra-Urban Areas
Authors: Leszek Szpakowski, Daniel Celiński, Sławomir Pilip, Grzegorz Michalak
Abstract:
The aim of the study was to analyse the usefulness of the 'E-rescuer' pilot project planned to be implemented in a chosen area of Eastern Poland in cases of suspected sudden cardiac arrest in extra-urban areas. The crucial assumption of the pilot project is an application allowing the simultaneous dispatch of both Medical Emergency Teams and the E-rescuer to the place of the incident. The E-rescuer is defined as a trained person able to provide effective basic life support and to use an automated external defibrillator. Having logged in using a smartphone, the E-rescuer reports readiness online to provide cardiopulmonary resuscitation exactly at the given location. Due to the accurately defined location of the E-rescuer, the arrival time can be precisely estimated, and substantive support through displayed algorithms can be provided as well. In the analysis of medical records from the years 2015-2016, cardiopulmonary resuscitation was considered effective when an early indication of circulation was achieved and the patient was taken to hospital. In the mentioned term, there were 2,291 cases of sudden cardiac arrest. Cardiopulmonary resuscitation was undertaken in 621 patients in total, including 205 people in the urban area and 416 in the extra-urban areas. The effectiveness of cardiopulmonary resuscitation in the extra-urban areas was much lower (33.8%) than in the urban area (50.7%). The average ambulance arrival time was correspondingly longer in the extra-urban areas: 12.3 minutes, compared with 3.3 minutes in the urban area. There was no significant difference in the average age of the studied patients: 62.5 and 64.8 years old. However, the average ambulance arrival time was 7.6 minutes for effective resuscitations and 10.5 minutes for ineffective ones. Hence, the ambulance arrival time is a crucial factor influencing the effectiveness of cardiopulmonary resuscitation, especially in the extra-urban areas, where it is much longer than in the urban area. The key role of trained E-rescuers nearby providing basic life support before the ambulance arrival can effectively support the Emergency Medical Services System in Poland.
Keywords: basic life support, bystander, effectiveness, resuscitation
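As a purely illustrative sketch of the dispatch idea described above (the actual 'E-rescuer' application logic is not published in the abstract), the nearest logged-in rescuer could be selected by great-circle distance, for example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical logged-in rescuers: (id, latitude, longitude)
rescuers = [("A", 52.10, 23.10), ("B", 52.18, 23.25), ("C", 52.05, 23.40)]
incident = (52.12, 23.20)  # reported location of the suspected cardiac arrest

nearest = min(rescuers, key=lambda r: haversine_km(r[1], r[2], *incident))
print("Dispatch E-rescuer:", nearest[0])
```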
Procedia PDF Downloads 203
1573 Barriers for Appropriate Palliative Symptom Management: A Qualitative Research in Kazakhstan, a Medium-Income Transitional-Economy Country
Authors: Ibragim Issabekov, Byron Crape, Lyazzat Toleubekova
Abstract:
Background: Palliative care substantially improves the quality of life of terminally-ill patients. Symptom control is one of the keystones in the management of patients in palliative care settings, lowering distress as well as improving the quality of life of patients with end-stage diseases. The most common symptoms causing significant distress for patients are pain, nausea and vomiting, increased respiratory secretions and mental health issues like depression. Aims are: 1. to identify best practices in symptom management in palliative patients in accordance with internationally approved guidelines and compare aforementioned with actual practices in Kazakhstan; to evaluate the criteria for assessing symptoms in terminally-ill patients, 2. to review the availability and utilization of pharmaceutical agents for pain control, management of excessive respiratory secretions, nausea, and vomiting, and delirium and 3. to develop recommendations for the systematic approach to end-of-life symptom management in Kazakhstan. Methods: The use of qualitative research methods together with systematic literature review have been employed to provide a rigorous research process to evaluate current approaches for symptom management of palliative patients in Kazakhstan. Qualitative methods include in-depth semi-structured interviews of the healthcare professionals involved in palliative care provision. Results: Obstacles were found in appropriate provision of palliative care. Inadequate education and training to manage severe symptoms, poorly defined laws and regulations for palliative care provision, and a lack of algorithms and guidelines for care were major barriers in the effective provision of palliative care. Conclusion: Assessment of palliative care in this medium-income transitional-economy country is one of the first steps in the initiation of integration of palliative care into the existing health system. Achieving this requires identifying obstacles and resolving these issues.Keywords: end-of-life care, middle income country, palliative care, symptom control
Procedia PDF Downloads 200
1572 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
Converting the movement intents of a person into commands for action employing brain signals like electroencephalogram signals is a brain-computer interface (BCI) system. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve brain-computer interface (BCI) structures, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on electroencephalography (EEG) are greatly needed. Subject dependency and non-stationarity are two features of EEG signals. So, EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8 to 30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for the purpose of denoising, and then a method of analysis of variance is used to select more appropriate and informative channels from a large set of different channels. After ordering channels based on their efficiencies, a sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, with the purpose of comparing their performance in this application. Utilizing a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. Outcomes demonstrated that the SVM classifier achieved the greatest classification precision of 97% when compared to the other available approaches. The entire investigative findings confirm that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses the existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support vector machine
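A minimal sketch of the kind of pipeline described above (band-pass filtering, common average referencing, ANOVA-based channel ranking, then an SVM) is shown below with SciPy and scikit-learn; the data arrays, sampling rate, and channel count are hypothetical, and this is not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import SelectKBest, f_classif   # ANOVA F-test ranking
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 250                                    # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((90, 22, 500))  # 90 trials, 22 channels, 2 s epochs
y = rng.integers(0, 2, 90)                  # left/right-hand motor imagery labels (synthetic)

# 8-30 Hz band-pass (mu and beta bands), then a common average reference
b, a = butter(4, [8, 30], btype="band", fs=fs)
X_filt = filtfilt(b, a, X_raw, axis=-1)
X_filt -= X_filt.mean(axis=1, keepdims=True)

# Simple per-channel feature: log band power of the filtered epoch
features = np.log(np.var(X_filt, axis=-1))   # shape (trials, channels)

# ANOVA F-test keeps the k most discriminative channels, then an SVM classifies
clf = make_pipeline(SelectKBest(f_classif, k=8), StandardScaler(), SVC(kernel="rbf"))
print("10-fold CV accuracy:", cross_val_score(clf, features, y, cv=10).mean())
```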
Procedia PDF Downloads 50
1571 To Study Small for Gestational Age as a Risk Factor for Thyroid Dysfunction
Authors: Shilpa Varghese, Adarsh Eregowda
Abstract:
Introduction: The normal development and maturation of the central nervous system is significantly influenced by thyroid hormones. According to several studies, small for gestational age (SGA) babies have a distinct hormonal profile from that of babies born at an appropriate birth weight for gestational age (AGA). In SGA babies, thyroid size is larger when expressed as a percentage of body weight, indicating that low thyroid hormone levels throughout foetal life may be partially compensated for. Numerous investigations have found that, compared to full-term and preterm AGA neonates, SGA babies exhibit considerably decreased plasma thyroid hormone levels. According to our hypothesis, term and preterm SGA newborns have greater thyroid-stimulating hormone (TSH) concentrations than those that are appropriate for gestational age (AGA) and a higher incidence of thyroid dysfunction. Need for the study: The assessment of clinically diagnosed term SGA babies for confirming thyroid dysfunction is unclear, as are the requirement and importance of FT4 alongside TSH and the comparative values of FT4 in SGA babies as compared to AGA babies. Inclusion criteria: SGA infants, both preterm (<37 weeks of gestation) and term (37-40 weeks), compared with preterm and term AGA infants. [Figure: mean TSH comparison, AGA babies 3.76 vs. SGA babies 7.66; mean FT4 comparison, AGA babies 2.73 vs. SGA babies 1.52.] Discussion: According to this study, neonates with SGA had considerably higher TSH levels than newborns with AGA. Our findings are supported by results from earlier research. In the study by Bosch-Giménez et al., in which the TSH level range was established at 7.5 mU/L, greater TSH concentrations were also found in SGA newborns. Thyroid hormone levels in newborns that are small for gestational age were found to be higher than in AGA newborns in our investigation. According to Franco et al., blood T4 concentrations are lower in both preterm and term SGA infants, while TSH concentrations are only noticeably greater in term SGA infants compared to AGA ones. According to our study analysis, the SGA group had considerably greater FT4 concentrations. Therefore, our findings are consistent with those of the two studies in that SGA babies have a higher incidence of transient hypothyroidism and need close follow-up. Conclusions: A greater frequency of thyroid dysfunction and considerably higher TSH values within the normal range were seen in preterm and term SGA babies. The SGA babies who exhibit these characteristics should have ongoing endocrinologic testing and periodic TFTs.
Keywords: thyroid hormone, thyroid function tests, small for gestational age, appropriate for gestational age
Procedia PDF Downloads 66
1570 Elastic Behaviour of Graphene Nanoplatelets Reinforced Epoxy Resin Composites
Authors: V. K. Srivastava
Abstract:
Graphene has recently attracted increasing attention in nanocomposite applications because it has 200 times greater strength than steel, making it the strongest material ever tested. Graphene, as the fundamental two-dimensional (2D) carbon structure with exceptionally high crystal and electronic quality, has emerged as a rapidly rising star in the field of material science. Graphene, defined as a 2D crystal, is composed of monolayers of carbon atoms arranged in a honeycombed network with six-membered rings, which is of interest to both theoretical and experimental researchers worldwide. The name comes from graphite and alkene. Graphite itself consists of many graphite sheets stacked together by weak van der Waals forces. This is attributed to the monolayer of carbon atoms densely packed into the honeycomb structure. Due to the superior inherent properties of graphene nanoplatelets (GnP) over other nanofillers, GnP particles were added to epoxy resin with varying weight percentages. It is indicated that the DMA results of storage modulus, loss modulus and tan δ, defined as the ratio of the loss modulus to the storage (elastic) modulus, versus temperature were affected by the addition of GnP to the epoxy resin. In epoxy resin, damping (tan δ) is usually caused by movement of the molecular chain. The tan δ of the graphene nanoplatelets/epoxy resin composite is much lower than that of epoxy resin alone. This finding suggests that the addition of graphene nanoplatelets effectively impedes movement of the molecular chain. The decrease in storage modulus can be interpreted by an increasing susceptibility to agglomeration, leading to less energy dissipation in the system under viscoelastic deformation. The results indicate that tan δ increased with the increase of temperature, which confirms that tan δ is associated with magnetic field strength. Also, the results show that the nanohardness increases marginally with the increase of elastic modulus. GnP-filled epoxy resin gives higher values than the epoxy resin alone, because GnP improves the mechanical properties of the epoxy resin. Debonding of GnP is clearly observed in the micrograph, showing agglomeration of fillers and inhomogeneous distribution. Therefore, the DMA and nanohardness studies indicate that the elastic modulus of epoxy resin is increased with the addition of GnP fillers.
Keywords: agglomeration, elastic modulus, epoxy resin, graphene nanoplatelet, loss modulus, nanohardness, storage modulus
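For clarity, the loss tangent referred to above is conventionally written as the ratio of the loss (imaginary) modulus E'' to the storage (elastic) modulus E':

```latex
\tan\delta = \frac{E''}{E'}
```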
Procedia PDF Downloads 264
1569 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria
Authors: Wale Agbaje
Abstract:
The aim of this research study is to assess the impact of financial statement fraud on the profitability of some selected Nigerian manufacturing firms covering the period 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and to ascertain the relationship between improper expense recognition and return on assets (ROA). To achieve these objectives, a descriptive research design was used for the study, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. Analysis of covariance (ANCOVA) was used, and the STATA II econometric method was used in the analysis of the data. The Altman model and operating expenses ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002-2016, and validation of the parameters was ascertained using various statistical techniques such as the t-test, coefficient of determination (R2), F-statistics and Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at the 5% level of significance. The findings of the analysis revealed that there is a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry. It was revealed that incorrect asset valuation has a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability, and so does improper expense recognition. The implication of this is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry. The study therefore recommended that pragmatic policy options need to be taken in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance manufacturing industry performance in the country, and also that the stemming of financial statement fraud should be adequately inculcated into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets
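For reference, the Altman model mentioned above is, in its original form for publicly traded manufacturing firms, the discriminant function below (the variable definitions are the standard ones and are not restated in the abstract):

```latex
Z = 1.2X_1 + 1.4X_2 + 3.3X_3 + 0.6X_4 + 1.0X_5
```

where X1 is working capital/total assets, X2 retained earnings/total assets, X3 earnings before interest and taxes/total assets, X4 market value of equity/book value of total liabilities, and X5 sales/total assets; Z values below roughly 1.8 are conventionally read as signalling financial distress.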
Procedia PDF Downloads 161
1568 Transitioning towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges
Authors: Mozhdeh Khalili Kordabadi
Abstract:
Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. By adopting these strategies, the textile industry can contribute to a more sustainable and environmentally friendly future. Introduction: Textiles, particularly clothing, are essential to human existence. However, the rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition. Approaches to Address Environmental Challenges: Creation of Renewable Raw Materials Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. Improvising Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. 
These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.
Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension
Procedia PDF Downloads 96
1567 Applying Unmanned Aerial Vehicle on Agricultural Damage: A Case Study of the Meteorological Disaster on Taiwan Paddy Rice
Authors: Chiling Chen, Chiaoying Chou, Siyang Wu
Abstract:
Taiwan is located in the western Pacific Ocean at the intersection of continental and marine climates. Typhoons frequently strike Taiwan and bring meteorological disasters, i.e., heavy flooding, landslides, loss of life and property, etc. Global climate change brings more extreme meteorological disasters. So, developing techniques to improve disaster prevention and mitigation is needed, and improving rescue processes and rehabilitation is important as well. In this study, UAVs (Unmanned Aerial Vehicles) are applied to take instant images for improving the disaster investigation and rescue processes. Paddy rice fields in central Taiwan are the study area. These were damaged by heavy rain during the monsoon season in June 2016. UAV images provide high ground resolution (3.5 cm) with 3D point clouds to develop image discrimination techniques and a digital surface model (DSM) of rice lodging. Firstly, supervised image classification with the Maximum Likelihood Method (MLD) is used to delineate the area of rice lodging. Secondly, 3D point clouds generated by Pix4D Mapper are used to develop a DSM for classifying the lodging levels of paddy rice. As results, the discrimination accuracy of rice lodging is 85% by supervised image classification, and the classification accuracy of lodging level is 87% by DSM. Therefore, UAVs not only provide instant images of agricultural damage after a meteorological disaster, but the image discrimination of rice lodging also reaches acceptable accuracy (>85%). In the future, technologies of UAVs and image discrimination will be applied to different crop fields. The results of image discrimination will be overlaid with administrative boundaries of paddy rice to establish a GIS-based assist system for agricultural damage discrimination. Therefore, the time and labor for damage detection and monitoring would be greatly reduced.
Keywords: monsoon, supervised classification, Pix4D, 3D point clouds, discrimination accuracy
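As background for the supervised classification step above, a maximum likelihood classifier assigns each pixel to the class whose multivariate Gaussian (fitted on training pixels) gives the highest log-likelihood. A compact Python sketch under that assumption follows; the band values and class names are made up and are not the study's imagery:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training pixels: spectral band values per class (lodged vs. not lodged)
train = {
    "lodged":     np.array([[0.32, 0.41, 0.22], [0.30, 0.43, 0.20], [0.33, 0.40, 0.24]]),
    "not_lodged": np.array([[0.12, 0.55, 0.35], [0.10, 0.57, 0.33], [0.13, 0.54, 0.36]]),
}

# Fit one multivariate Gaussian per class (maximum likelihood classification);
# a small diagonal term keeps the covariance invertible with few training samples.
models = {
    c: multivariate_normal(mean=p.mean(axis=0), cov=np.cov(p, rowvar=False) + 1e-6 * np.eye(3))
    for c, p in train.items()
}

def classify(pixel):
    return max(models, key=lambda c: models[c].logpdf(pixel))

print(classify(np.array([0.31, 0.42, 0.23])))   # -> "lodged"
```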
Procedia PDF Downloads 300
1566 Impacts of Community Forest on Forest Resources Management and Livelihood Improvement of Local People in Nepal
Authors: Samipraj Mishra
Abstract:
Despite the successful implementation of the community forestry program, a number of pros and cons have been raised about Terai community forestry in the lowland region of Nepal, locally called the Terai, which climatically belongs to the tropical humid zone and possesses high-quality forests in terms of ecology and economy. The study aims to investigate the local pricing strategy for forest products and its impacts on equitable forest benefit sharing, collection of the community fund and the carrying out of livelihood improvement activities. The study, carried out on six community forests, revealed that local people have substantially benefited from the community forests. However, because the region is heterogeneous in socio-economic conditions and the forest resources have high economic potential, the low pricing strategy decided on by the local people has created inequality problems in sharing the forest benefits, contributed poorly to community fund collection and consequently allowed only limited livelihood improvement activities to be carried out. The paper argues that a low pricing strategy for forest products is counter-productive for promoting equitable benefit sharing in areas of heterogeneous socio-economic conditions with high-value forests. The low pricing strategy has been increasing the access of better-off households at a higher rate than that of poor households, as such households always have higher affording capacity. It is also ineffective in increasing the community fund and carrying out livelihood improvement activities effectively. The study concluded that unilateral decentralized forest policy and decision-making autonomy for local people seem questionable unless their decision-making capacities are enriched sufficiently. Therefore, it is recommended that empowerment of the decision-making capacity of local people and their respective institutions, together with policy and program formulation, are prerequisites for efficient and equitable community forest management and its long-term sustainability.
Keywords: community forest, livelihood, socio-economy, pricing system, Nepal
Procedia PDF Downloads 272
1565 The Exposure to Endocrine Disruptors during Pregnancy and Relation to Steroid Hormones
Authors: L. Kolatorova, J. Vitku, K. Adamcova, M. Simkova, M. Hill, A. Parizek, M. Duskova
Abstract:
Endocrine disruptors (EDs) are substances leaching from various industrial products, which are able to interfere with the endocrine system. Their harmful effects on human health are generally well-known, and exposure during fetal development may have lasting effects. Fetal exposure and transplacental transport of bisphenol A (BPA) have been recently studied; however, less is known about alternatives such as bisphenol S (BPS), bisphenol F (BPF) and bisphenol AF (BPAF), which have started to appear in consumer products. The human organism is usually exposed to the mixture of EDs, out of which parabens are otherwise known to transfer placenta. The usage of many cosmetic, pharmaceutical and consumer products during the pregnancy that may contain parabens and bisphenols has led to the need for investigation. The aim of the study was to investigate the transplacental transport of BPA, its alternatives, and parabens, and to study their relation to fetal steroidogenesis. BPA, BPS, BPF, BPAF, methylparaben, ethylparaben, propylparaben, butylparaben, benzylparaben and 15 steroids including estrogens, corticoids, androgens and immunomodulatory ones were determined in 27 maternal (37th week of gestation) and cord plasma samples using liquid chromatography - tandem mass spectrometry methods. The statistical evaluation of the results showed significantly higher levels of BPA (p=0.0455) in cord plasma compared to maternal plasma. The results from multiple regression models investigated that in cord plasma, methylparaben, propylparaben and the sum of all measured parabens were inversely associated with testosterone levels. To our best knowledge, this study is the first attempt to determine the levels of alternative bisphenols in the maternal and cord blood, and also the first study reporting the simultaneous detection of bisphenols, parabens, and steroids in these biological fluids. Our study confirmed the transplacental transport of BPA, with likely accumulation in the fetal compartment. The negative association of cord blood parabens and testosterone levels highlights their possible risks, especially for the development of male fetuses. Acknowledgements: This work was supported by the project MH CR 17-30528 A from the Czech Health Research Council, MH CZ - DRO (Institute of Endocrinology - EÚ, 00023761) and by the MEYS CR (OP RDE, Excellent research - ENDO.CZ).Keywords: bisphenol, endocrine disruptor, paraben, pregnancy, steroid
Procedia PDF Downloads 178
1564 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn
Authors: Y. T. Chui, W. M. Au, L. Li
Abstract:
Recently, wearable electronic textiles have been emerging in today's market and developing rapidly since, besides the need for clothing for leisure, fashion wear and personal protection, there also exists a high demand for clothing capable of functioning in this electronic age, such as interactive interfaces, sensual being and tangible touch, social fabric, material witness and so on. With the requirements for wearable electronic textiles to be more comfortable, adorable and easy to care for, conductive yarn becomes one of the most important fundamental elements within the wearable electronic textile for interconnection between different functional units or for creating a functional unit. The properties of conductive yarns from different companies can vary to a large extent. There are vitally important criteria for selecting the conductive yarns, which may directly affect the optimization, prospects, applicability and performance of the final garment. However, according to the literature review, little research on off-the-shelf conductive yarns focuses on assessment methods for the scientific and systematic selection of materials under different conditions. Therefore, in this study, direction for selecting high-quality conductive yarns is given. The aim is to test the stability and reliability of the conductive yarns against the problems industrialists would experience with the yarns during every manufacturing process; accordingly, this assessment system can be classified into four stages. That is: 1) Yarn stage, 2) Fabric stage, 3) Apparel stage and 4) End user stage. Several tests with clear experimental procedures and parameters are suggested to be carried out in each stage. This assessment method suggests that optimal conductive yarns should be stable in their properties and resistant to various forms of corrosion at every production stage and during use. It is expected that this demonstration of the assessment method can serve as a pilot study that assesses the stability of Ag/nylon yarns systematically under various conditions, i.e. during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters that they should be reminded of to achieve a higher level of suitability, precision and controllability.
Keywords: applicability, assessment method, conductive yarn, wearable electronics
Procedia PDF Downloads 535
1563 [Keynote Talk]: Mental Health Challenges among Women in Dubai, Mental Health Needs Assessment for Dubai (2015), Public Health and Safety Department - Dubai Health Authority (DHA)
Authors: Kadhim Alabady
Abstract:
Purpose: Mental health problems affect women and men equally, but some are more common among women. The aim is to provide a baseline of the current picture of major mental health challenges among women in Dubai, which can then be used to measure the impact of interventions or service development. Method: We used mixed-methods evaluation approaches to increase the validity of findings through a variety of data collection techniques, integrating qualitative and quantitative methods in this piece of work. Combining the two approaches allows exploration of issues that might not be highlighted sufficiently through one method alone. Results: The key findings are: The prevalence of people who suffer from different types of mental disorders remains largely unknown; many women are unwilling to seek professional help because of lack of awareness or the stigma attached to it. -It is estimated there were around 2,928–4,392 mothers in Dubai (2014) suffering from postnatal depression, of which 858–1,287; early intervention can be effective. -The system for managing health care for women with mental illness is fragmented and contains gaps and duplications. -It is estimated that 1,029 girls aged 13–19 years are affected by anorexia nervosa. Recommendations: -Work is required with primary health care in order to identify women with undiagnosed mental illnesses. Further work should be undertaken within primary health care to assess disease registries with the aim of helping GP practices to improve their disease registers. -It is important to conduct local psychiatric morbidity surveys in Dubai to obtain data and assess the prevalence of essential mental health symptoms and conditions that are not routinely collected, to get a clear sense of what is needed and to assist decision and policy making in getting a complete picture of what services are required. -Emergency Mental Health Care – there is a need for a crisis response team to respond to emergencies in the community. -Continuum of care – there is a significant gap in the services for women once they are diagnosed with a mental disorder.
Keywords: mental health, depression, schizophrenia, women
Procedia PDF Downloads 208
1562 The H3R Antagonist E159 Alleviates Neuroinflammation and Autistic-Like Phenotypes in BTBR T+ tf/J Mouse Model of Autism
Authors: Shilu Deepa Thomas, P. Jayaprakash, Dorota Łazewska, Katarzyna Kieć-Kononowicz, B. Sadek
Abstract:
A large body of evidence suggests the involvement of cognitive impairment, increased levels of inflammation and oxidative stress in the pathogenesis of autism spectrum disorder (ASD). ASD commonly coexists with psychiatric conditions like anxiety and cognitive challenges, and individuals with ASD exhibit significant levels of inflammation and immune system dysregulation. Previous Studies have identified elevated levels of pro-inflammatory markers such as IL-1β, IL-6, IL-2 and TNF-α, particularly in young children with ASD. The current therapeutic options for ASD show limited effectiveness, signifying the importance of exploring an efficient drugs to address the core symptoms. The role of histamine H3 receptors (H3Rs) in memory and the prospective role of H3R antagonists in pharmacological control of neurodegenerative disorders, e.g., ASD, is well-accepted. Hence, the effects of chronic systemic administration of H3R antagonist E159 on autistic-like repetitive behaviors, social deficits, memory and anxiety parameters, as well as neuroinflammation in Black and Tan BRachyury (BTBR) mice, were evaluated using Y maze, Barnes maze, self-grooming, open field and three chamber social test. E159 (2.5, 5 and 10 mg/kg, i.p.) dose-dependently ameliorated repetitive and compulsive behaviors by reducing the increased time spent in self-grooming and improved reduced spontaneous alternation in BTBR mice. Moreover, treatment with E159 attenuated disturbed anxiety levels and social deficits in tested male BTBR mice. Furthermore, E159 attenuated oxidative stress by significantly increasing GSH, CAT, and SOD and decreasing the increased levels of MDA in the cerebellum as well as the hippocampus. In addition, E159 decreased the elevated levels of proinflammatory cytokines (tumor necrosis factor (TNF-α), interleukin-1β (IL-1β), and IL-6). The observed results show that H3R antagonists like E159 may represent a promising novel pharmacological strategy for the future treatment of ASD.Keywords: histamine H3 receptors, antagonist E159, autism, behaviors, mice
Procedia PDF Downloads 64
1561 SIPTOX: Spider Toxin Database Information Repository System of Protein Toxins from Spiders by Using MySQL Method
Authors: Iftikhar Tayubi, Tabrej Khan, Rayan Alsulmi, Abdulrahman Labban
Abstract:
Spiders produce a special kind of substance. This special kind of substance is called a toxin. The toxin is composed of many types of protein, which differ from species to species. Spider toxin consists of several proteins and non-proteins that include various categories of toxins like myotoxin, neurotoxin, cardiotoxin, dendrotoxin, haemorrhagins, and fibrinolytic enzymes. Protein sequence information for the toxins, with references, was derived from the literature and public databases. From previous findings, spider toxin would be the best choice to treat different types of tumors and cancer. There are many therapeutic regimens which cause more side effects than therapeutic benefit; hence a different approach must be adopted for the treatment of cancer. Combinations of drugs are being encouraged, and dramatic outcomes are reported. Spider toxin is one of the natural cytotoxic compounds. Hence, it is being used to treat different types of tumors; in particular, its positive effect on breast cancer has been reported during the last few decades. SPIDTOXD provides a single source of information about spider toxins, which will be useful for pharmacologists, neuroscientists, toxicologists and medicinal chemists. The well-ordered and accessible web interface allows users to explore detailed information on spiders and toxin proteins. It includes common name, scientific name, entry id, entry name, protein name and length of the protein sequence. The utility of this database is that it can provide a user-friendly interface for users to retrieve information about spiders, toxins and toxin proteins of different spider species. The database interfaces will satisfy the demands of the scientific community by providing in-depth knowledge about spiders and their toxins. We adopted a methodology using MySQL and PHP, and for the design, we used SmartDraw. Users can thus navigate from one section to another, depending on their field of interest. This database contains a wealth of information on species, toxins, clinical data, etc. It will be useful for the scientific community, basic researchers and those interested in potential pharmaceutical industry applications.
Keywords: siptoxd, php, mysql, toxin
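The abstract does not publish the actual schema, so the sketch below is only a hypothetical illustration of how a minimal toxin table of the kind described (species names, toxin category, protein name, sequence length) might be created and populated from Python with the mysql-connector-python package; the connection details, table name, and columns are all invented:

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical connection details -- not taken from the SIPTOX paper
conn = mysql.connector.connect(host="localhost", user="siptox", password="secret", database="siptoxd")
cur = conn.cursor()

# Minimal illustrative schema: one row per toxin protein entry
cur.execute("""
    CREATE TABLE IF NOT EXISTS toxin_protein (
        entry_id        INT AUTO_INCREMENT PRIMARY KEY,
        common_name     VARCHAR(100),
        scientific_name VARCHAR(150),
        toxin_category  VARCHAR(50),    -- e.g. neurotoxin, myotoxin, cardiotoxin
        protein_name    VARCHAR(150),
        sequence_length INT
    )
""")

# Illustrative entry only; values are for demonstration, not curated data
cur.execute(
    "INSERT INTO toxin_protein (common_name, scientific_name, toxin_category, protein_name, sequence_length) "
    "VALUES (%s, %s, %s, %s, %s)",
    ("Chilean rose tarantula", "Grammostola rosea", "neurotoxin", "GsMTx4", 34),
)
conn.commit()
conn.close()
```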
Procedia PDF Downloads 182
1560 Meditation and Insight Interpretation Using Quantum Circle Based-on Experiment and Quantum Relativity Formalism
Authors: Somnath Bhattachryya, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
In this study of meditation and insight, the design of and experimentation with electronic circuits to manipulate the meditators' mental circles, called the chakras, so that they have the same size is proposed. The circuit is a 4-port device, called an add-drop multiplexer, that represents the meditation structure called the four foundations of mindfulness; an AC power signal is then used as an input instead of the meditation time function, and various behaviors are studied with the method of re-filtering the signal (successive filtering), analogous to the noble eightfold path. The procedure starts by inputting a signal at a frequency that causes the velocity of the wave on the perimeter of the circuit to give particles the speed of light in a vacuum. The signal changes between electromagnetic waves and matter waves according to the velocity (frequency) until it reaches the relativistic limit. The electromagnetic waves are transformed into photons with wave-particle properties that overcome the limit of the speed of light. The matter wave, in contrast, travels to the other side and cannot pass through the relativistic limit; it is called a shadow signal (echo), which can gain power from increasing speed but cannot create a speed faster than light, or insight. In the experiment, only the side where the velocity is positive, that is, where the speed is above light or at the corresponding frequency, indicates intelligence. The other side (echo) can be obtained by changing the input to the other side of the circuit, which gives the same result but no intelligence or speed beyond light. The circuit is also used to study the stretching and contraction of time and wormholes, which can be applied to teleporting, the Bose-Einstein condensate, teleprinting, and the quantum telephone. Teleporting can happen throughout the system with the wave-particle and the echo: when the speed of the particle is faster than the stretching or contraction of time, the particle will submerge in the wormhole, and when the destination and time are determined, it will travel through the wormhole. In a wormhole, time can be set to the future or the past. The experimental results using the microstrip circuit were found to be in accordance with the principle of quantum relativity, which can be further developed into both tools and practice for meditation practitioners in quantum technology.
Keywords: quantum meditation, insight picture, quantum circuit, absolute time, teleportation
Procedia PDF Downloads 64
1559 Effect of Sulphur Concentration on Microbial Population and Performance of a Methane Biofilter
Authors: Sonya Barzgar, J. Patrick, A. Hettiaratchi
Abstract:
Methane (CH4) is the second largest contributor to the greenhouse effect, with a global warming potential (GWP) of 34 relative to carbon dioxide (CO2) over a 100-year horizon, so there is growing interest in reducing emissions of this gas. Methane biofiltration (MBF) is a cost-effective technology for reducing low-volume point-source emissions of methane. In this technique, microbial oxidation of methane is carried out by methane-oxidizing bacteria (methanotrophs), which use methane as their carbon and energy source. MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting methane to carbon dioxide and water (H2O). Even though biofiltration has been shown to be an efficient, practical, and viable technology, its design and operational parameters, as well as the relevant microbial processes, have not been investigated in depth. In particular, limited research has been done on the effects of sulphur on methane bio-oxidation. Since bacteria require a variety of nutrients for growth, improving the performance of methane biofiltration requires establishing the quantities of nutrients to be supplied to the biofilter so that the process can be sustained. The study described in this paper was conducted to determine the influence of sulphur on methane elimination in a biofilter. A set of experimental measurements was carried out to explore how the conversion of elemental sulphur could affect methane oxidation in terms of methanotroph growth and system pH. Batch experiments with different concentrations of sulphur were performed while keeping the other parameters, i.e., moisture content, methane concentration, oxygen level, and compost, at their optimum levels. The study revealed the sulphur level that can be tolerated without interfering with methane oxidation, as well as the particular sulphur concentration leading to the greatest methane elimination capacity. Because of sulphur oxidation, the pH varies transiently, which affects microbial growth behavior; methanotrophs are incapable of growth at pH values below 5.0 and are thus apparently unable to oxidize methane under such conditions. The pH for optimal growth of the methanotrophic bacteria was determined. Finally, methane concentration over time in the presence of sulphur is also presented for laboratory-scale biofilters. Keywords: global warming, methane biofiltration (MBF), methane oxidation, methanotrophs, pH, sulphur
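For readers unfamiliar with how biofilter performance is usually reported, the sketch below computes the two standard metrics implied here, removal efficiency and elimination capacity, from inlet/outlet methane concentrations, gas flow rate, and bed volume. The numbers are placeholders, not data from this study.

```python
# Standard biofilter performance metrics (illustrative values, not study data).
# Elimination capacity EC = Q * (C_in - C_out) / V   [g CH4 per m^3 bed per h]
# Removal efficiency  RE = (C_in - C_out) / C_in * 100  [%]

def biofilter_performance(c_in, c_out, flow_m3_per_h, bed_volume_m3):
    """c_in, c_out in g CH4 per m^3 of gas; returns (EC, RE)."""
    ec = flow_m3_per_h * (c_in - c_out) / bed_volume_m3
    re = (c_in - c_out) / c_in * 100.0
    return ec, re

# Hypothetical example: 10 g/m^3 in, 3 g/m^3 out, 0.05 m^3/h through 0.01 m^3 of compost.
ec, re = biofilter_performance(10.0, 3.0, 0.05, 0.01)
print(f"Elimination capacity: {ec:.1f} g CH4/(m^3 h), removal efficiency: {re:.0f}%")
```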
Procedia PDF Downloads 236
1558 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings
Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller
Abstract:
Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach that allows a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach relies on a wavelet-based analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for automatically distinguishing between healthy and pathological voices. Within this approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to normalize the PCA space by registering it to a coordinate system defined by a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results that are based on PCA spaces obtained from different clinical subjects. Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram
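The classification chain described here (per-vocal-fold phonovibrogram features, PCA projection to a low-dimensional space, supervised healthy-vs-pathological classification) can be sketched in a few lines. The code below is a generic scikit-learn illustration with random placeholder features and dummy labels, not the authors' wavelet pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: one row per recording, columns standing in for
# wavelet-derived phonovibrogram descriptors (random here, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))          # 120 recordings, 200 raw features
y = rng.integers(0, 2, size=120)         # 0 = healthy, 1 = pathological (dummy labels)

# PCA reduces the feature set to a low-dimensional space; a classifier then
# separates healthy from pathological voices in that space.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print("Cross-validated accuracy (meaningless on random data):", scores.mean())
```

With real PVG features the same pipeline structure applies; the normalization step described in the abstract would correspond to fitting the PCA basis on a fixed set of synthetic patterns rather than on the clinical data themselves.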
Procedia PDF Downloads 265
1557 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance
Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian
Abstract:
Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a well-known CSP technology, recognized for being inexpensive and having a low land-use factor, but suffering from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found to be a promising option for producing steam directly for a thermal cycle in order to generate low-cost electricity, and it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTF), and to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. Optimizing the solar field size yields a solar multiple (SM) between 1.2 and 1.5, giving an LCOE as low as 9 cents/kWh for direct steam generation with the linear Fresnel reflector. In addition, the power plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% less capital cost. Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation
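To make the reported figures easier to interpret, the sketch below shows how a simple LCOE and capacity factor are typically computed from capital cost, operating cost, discount rate, and annual generation. All input values are illustrative placeholders, not the parameters of the modelled 50 MWe plant.

```python
# Simplified LCOE and capacity-factor calculation (illustrative inputs only).

def crf(discount_rate, lifetime_years):
    """Capital recovery factor used to annualize the capital cost."""
    r, n = discount_rate, lifetime_years
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def lcoe(capex_usd, opex_usd_per_year, annual_energy_mwh, discount_rate=0.07, lifetime=25):
    annualized_capex = capex_usd * crf(discount_rate, lifetime)
    return (annualized_capex + opex_usd_per_year) / annual_energy_mwh  # USD/MWh

def capacity_factor(annual_energy_mwh, nameplate_mw):
    return annual_energy_mwh / (nameplate_mw * 8760.0)

# Hypothetical 50 MWe plant with placeholder costs and generation.
annual_energy = 140_000.0                      # MWh/year, assumed value
print(f"LCOE: {lcoe(250e6, 4e6, annual_energy):.1f} USD/MWh")
print(f"Capacity factor: {capacity_factor(annual_energy, 50.0):.1%}")
```

A full techno-economic model of the kind used in the study would add the solar field optics, thermal losses, and storage dispatch, but the LCOE and capacity-factor definitions remain those shown here.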
Procedia PDF Downloads 111
1556 Review of Life-Cycle Analysis Applications on Sustainable Building and Construction Sector as Decision Support Tools
Abstract:
Considering the environmental issues generated by the building sector through its energy consumption, solid waste generation, water use, land use, and global greenhouse gas (GHG) emissions, this review points to LCA as a decision-support tool that can substantially improve sustainability in the building and construction industry. The comprehensiveness and simplicity of LCA make it one of the most promising decision-support tools for the sustainable design and construction of future buildings. This paper contains a comprehensive review of existing studies related to LCA, with a focus on its advantages and limitations when applied in the building sector. The aim of this paper is to enhance the understanding of building life-cycle analysis and thus promote its application for effective, sustainable building design and construction in the future. Comparisons and discussions are carried out between four categories of LCA methods: building material and component combination (BMCC) vs. whole process of construction (WPC) LCA, attributional vs. consequential LCA, process-based LCA vs. input-output (I-O) LCA, and traditional vs. hybrid LCA. Classical case studies are presented that illustrate the effectiveness of LCA as a tool to support practitioners' decisions in the design and construction of sustainable buildings. (i) The BMCC and WPC categories of LCA research tend to overlap, as the majority of WPC LCAs are actually developed using the bottom-up approach that BMCC LCAs employ. (ii) When the influence of social and economic factors outside the system boundary proposed by the research is considered, a consequential LCA can provide more reliable results than an attributional LCA. (iii) I-O LCA is complementary to process-based LCA in addressing the social and economic effects associated with building projects. (iv) Hybrid LCA provides a more dynamic perspective than traditional LCA, which is criticized for its static view of the changing processes within a building's life cycle. LCAs are still being developed to overcome their limitations and data shortages (especially data on the developing world), and the unification of LCA methods and data can make the results of building LCAs more comparable and consistent across different studies and even countries. Keywords: decision support tool, life-cycle analysis, LCA tools and data, sustainable building design
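As a minimal illustration of the tiered-hybrid idea in point (iv), where process-based data are used where they exist and input-output (I-O) intensities cover the remaining expenditure, the sketch below combines the two contributions for a hypothetical building element. All intensity values and cost splits are invented placeholders, not figures from the reviewed studies.

```python
# Tiered hybrid LCA sketch: process-based inventory where data exist,
# input-output (I-O) intensity applied to the remaining (uncovered) spend.
# All numbers are invented placeholders for illustration.

process_inventory = {            # kg CO2e per functional unit, from process data
    "concrete": 1200.0,
    "steel": 800.0,
}
uncovered_spend_usd = 15_000.0   # expenditure with no process data (e.g. services)
io_intensity = 0.45              # kg CO2e per USD for the relevant I-O sector

process_total = sum(process_inventory.values())
io_total = uncovered_spend_usd * io_intensity
hybrid_total = process_total + io_total

print(f"Process-based: {process_total:.0f} kg CO2e")
print(f"I-O remainder: {io_total:.0f} kg CO2e")
print(f"Hybrid total:  {hybrid_total:.0f} kg CO2e")
```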
Procedia PDF Downloads 121
1555 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography
Authors: Nicole M. Martino
Abstract:
Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, in which an inspector looks for and records the locations of cracks, potholes, efflorescence, and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; in this method, the inspector listens for subsurface damage as the surface is struck with a hammer or chain. Even though extensive procedures are in place for these inspection techniques, neither one gives the inspector a comprehensive understanding of the internal condition of a bridge deck, which is where damage originates. In order to make accurate estimates of repair locations and quantities, and to allocate the necessary funding, a complete understanding of the deck's deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were, first, to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete and, second, to determine whether combining the results of both methods would provide higher confidence than a condition assessment completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as part of this research, including these plan-view plots, are presented in this paper. Furthermore, in order to address the interest of transportation agencies throughout the United States, this research developed a step-by-step guide demonstrating how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification. Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks
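One simple way to fuse the two plan-view condition maps, not necessarily the procedure used in this work, is to normalize each map to a 0-1 damage-likelihood index and flag the grid cells where both methods agree. The grid values below are random placeholders standing in for GPR attenuation and infrared thermal-contrast maps on a common deck grid.

```python
import numpy as np

# Illustrative fusion of two plan-view NDE maps on a common grid.
# Values are random placeholders for GPR attenuation and IR thermal contrast.
rng = np.random.default_rng(1)
gpr_attenuation = rng.uniform(0, 40, size=(20, 50))   # dB, higher = more deteriorated
ir_contrast = rng.uniform(0, 4, size=(20, 50))        # deg C above sound concrete

def normalize(arr):
    """Scale a map to a 0-1 damage-likelihood index."""
    return (arr - arr.min()) / (arr.max() - arr.min())

gpr_score = normalize(gpr_attenuation)
ir_score = normalize(ir_contrast)

combined = 0.5 * gpr_score + 0.5 * ir_score           # equal-weight fusion
both_flag = (gpr_score > 0.7) & (ir_score > 0.7)      # cells both methods flag

print(f"Deck area flagged by both methods: {both_flag.mean():.1%}")
```

The combined map and the agreement mask are the kind of quantities that would then be rendered as the plan-view contour plots described above.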
Procedia PDF Downloads 154
1554 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model
Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck
Abstract:
Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as in any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N2O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and in developing engineered mitigation strategies for N2O production. To provide answers, this work focuses on manure as a model, the biggest wasted nitrogen mass flow through our economies. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil, was used as the external carbon source. Out-selection of nitrite-oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O2/L), high temperature (35ºC), and relatively high free ammonia (FA) (initially 10 mg NH3-N/L). After reaching steady state, the process was able to remove 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N2O emissions (over 15% of the nitrogen loading) were observed at the baseline operational condition, and these increased further under nitrite accumulation and at a low organic carbon to nitrogen ratio. Yet, higher DO (~2.2 mg O2/L) lowered aerobic N2O emissions and weakened the dependency of N2O on nitrite concentration, suggesting a shift in the N2O production pathway at elevated DO levels. Greenhouse gas emissions from such a system could be substantially reduced by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. Promising steps forward are presented in this abstract, and the insights of ongoing experiments will also be shared at the conference. Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure
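The free ammonia (FA) level used here to suppress NOB is normally estimated from total ammonium nitrogen, pH, and temperature with the widely used Anthonisen-type equilibrium relation sketched below. The reactor TAN and pH in the example are assumed placeholders, not measurements from this study.

```python
import math

def free_ammonia_mgN_per_L(tan_mgN_per_L, pH, temp_C):
    """Free ammonia (expressed as N) from total ammonium nitrogen, pH and
    temperature, using the commonly applied Anthonisen-type relation."""
    kb_over_kw = math.exp(6344.0 / (273.0 + temp_C))
    return tan_mgN_per_L * 10 ** pH / (kb_over_kw + 10 ** pH)

# Placeholder conditions (35 degC as in the study; TAN and pH assumed).
fa = free_ammonia_mgN_per_L(tan_mgN_per_L=300.0, pH=7.5, temp_C=35.0)
print(f"Estimated free ammonia: {fa:.1f} mg NH3-N/L")
```

With these assumed inputs the estimate comes out near the roughly 10 mg NH3-N/L initial level quoted above, but the actual in-reactor TAN and pH are not reported here.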
Procedia PDF Downloads 249
1553 Identification of New Familial Breast Cancer Susceptibility Genes: Are We There Yet?
Authors: Ian Campbell, Gillian Mitchell, Paul James, Na Li, Ella Thompson
Abstract:
The genetic cause of the majority of multiple-case breast cancer families remains unresolved. Next-generation sequencing has emerged as an efficient strategy for identifying predisposing mutations in individuals with inherited cancer. We are conducting whole-exome sequence analysis of germline DNA from multiple affected relatives from breast cancer families, with the aim of identifying rare protein-truncating and non-synonymous variants that are likely to include novel cancer-predisposing mutations. Data from more than 200 exomes show that, on average, each individual carries 30-50 protein-truncating mutations and 300-400 rare non-synonymous variants. Heterogeneity among our exome data strongly suggests that numerous moderate-penetrance genes remain to be discovered, with each gene individually accounting for only a small fraction of families (~0.5%). This scenario makes validation of candidate breast cancer predisposing genes in large case-control studies the rate-limiting step in resolving the missing heritability of breast cancer. The aim of this study is to screen genes that are recurrently mutated in our exome data in a larger cohort of cases and controls to assess the prevalence of inactivating mutations that may be associated with breast cancer risk. We are using the Agilent HaloPlex Target Enrichment System to screen the coding regions of 168 genes in 1,000 BRCA1/2 mutation-negative familial breast cancer cases and 1,000 cancer-naive controls. To date, our interim analysis has identified 21 genes that carry an excess of truncating mutations in multiple breast cancer families versus controls. The established breast cancer susceptibility gene PALB2 is the most frequently mutated gene (13/998 cases versus 0/1009 controls), but other interesting candidates include NPSR1, GSN, POLD2, and TOX3. These and other genes are being validated in a second cohort of 1,000 cases and controls. Our experience demonstrates that, beyond PALB2, the prevalence of mutations in the remaining breast cancer predisposition genes is likely to be very low, making definitive validation exceptionally challenging. Keywords: predisposition, familial, exome sequencing, breast cancer
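As a quick sanity check on counts like the PALB2 result quoted above (13 carriers among 998 cases versus 0 among 1,009 controls), a two-sided Fisher's exact test is the usual first look at a case-control burden difference. The snippet below is illustrative only and is not necessarily the statistical procedure used by the authors.

```python
from scipy.stats import fisher_exact

# 2x2 burden table for truncating variants in one gene (counts from the abstract):
#                carriers   non-carriers
#   cases            13         985
#   controls          0        1009
table = [[13, 998 - 13], [0, 1009 - 0]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Odds ratio: {odds_ratio}, p-value: {p_value:.2e}")
```

With a zero count in the control arm the odds ratio is reported as infinite; in practice a confidence interval or an exact logistic/Firth-type model would be used alongside the raw test.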
Procedia PDF Downloads 493