Search results for: HEMS utility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 674

284 Numerical Investigation of a Spiral Bladed Tidal Turbine

Authors: Mohammad Fereidoonnezhad, Seán Leen, Stephen Nash, Patrick McGarry

Abstract:

From the perspective of research innovation, the tidal energy industry is still in its early stages. While a very small number of turbines have progressed to utility-scale deployment, blade breakage is commonly reported due to the enormous hydrodynamic loading applied to devices. The aim of this study is the development of computer simulation technologies for the design of next-generation fibre-reinforced composite tidal turbines. This will require significant technical advances in the areas of tidal turbine testing and multi-scale computational modelling. The complex turbine blade profiles are designed to incorporate non-linear distributions of airfoil sections to optimize power output and self-starting capability while reducing power fluctuations. A number of candidate blade geometries are investigated, ranging from spiral geometries to parabolic geometries, with blades arranged in both cylindrical and spherical configurations on a vertical axis turbine. A combined blade element theory and start-up model (BET start-up model) is developed in MATLAB to perform computationally efficient parametric design optimisation for a range of turbine blade geometries. Finite element models are developed to identify optimal fibre-reinforced composite designs to increase blade strength and fatigue life. Advanced fluid-structure-interaction analyses are also carried out to compute blade deflections following design optimisation.
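
For illustration of the kind of computationally light parametric sweep a blade element start-up model enables, the sketch below evaluates the quasi-steady torque and power of a straight-bladed vertical-axis rotor over one revolution in Python (the study's model is implemented in MATLAB). Induced velocities and stall are neglected, and all numerical values (fluid density, radius, chord, foil coefficients, tip-speed ratio) are illustrative assumptions rather than parameters from the study, so the absolute outputs are optimistic.

```python
# Quasi-steady blade-element sweep for a straight-bladed vertical-axis rotor.
# Induction and stall are ignored, so results are optimistic; all values are
# illustrative assumptions, not parameters from the study.
import numpy as np

rho = 1025.0                 # seawater density, kg/m^3
U = 2.0                      # free-stream current speed, m/s
R, c, h = 1.0, 0.2, 2.0      # rotor radius, blade chord, blade span, m
n_blades = 3
tsr = 2.5                    # tip-speed ratio lambda = omega*R/U
omega = tsr * U / R

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)   # azimuth angles

Vt = omega * R + U * np.cos(theta)    # tangential relative velocity component
Vn = U * np.sin(theta)                # normal relative velocity component
W = np.hypot(Vt, Vn)                  # relative speed seen by the blade
alpha = np.arctan2(Vn, Vt)            # local angle of attack

Cl = 2.0 * np.pi * np.sin(alpha)              # thin-foil lift estimate
Cd = 0.02 + 0.1 * np.sin(alpha) ** 2          # crude drag model
Ct = Cl * np.sin(alpha) - Cd * np.cos(alpha)  # tangential force coefficient

torque = 0.5 * rho * W**2 * c * h * Ct * R    # single-blade torque vs azimuth
Q_mean = n_blades * torque.mean()             # rotor-average torque
P = Q_mean * omega
Cp = P / (0.5 * rho * U**3 * 2.0 * R * h)     # power coefficient on swept area 2*R*h
print(f"mean torque {Q_mean:.0f} N*m, power {P/1e3:.1f} kW, Cp {Cp:.2f}")
```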

Keywords: tidal turbine, composite materials, fluid-structure-interaction, start-up capability

Procedia PDF Downloads 94
283 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to ensure more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility of probabilistic methods and their capability to model progressively increasing uncertainty. Even though probabilistic approaches have been extensively developed in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned to the same forecasts. When the accuracy of deterministic forecasts progressively decreases, probabilistic approaches start to become competitive options until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.
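
As a stylized, single-period illustration of why forecast uncertainty information matters for scheduling decisions, the sketch below compares committing a point forecast against committing a quantile of a well-calibrated probabilistic forecast under an asymmetric shortfall penalty, in the spirit of the newsvendor problem. Prices, penalties and the production distribution are illustrative assumptions; the study's decision-making model, which also dispatches batteries, is considerably richer.

```python
# Stylized day-ahead commitment example: point forecast vs probabilistic quantile.
# All prices, penalties and distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
price, penalty = 100.0, 150.0        # revenue per MWh delivered, penalty per MWh short
production = rng.gamma(shape=9.0, scale=1.2, size=100_000)   # "true" PV output, MWh

def expected_profit(commitment):
    delivered = np.minimum(production, commitment)
    shortfall = np.maximum(commitment - production, 0.0)
    return float(np.mean(price * delivered - penalty * shortfall))

point_forecast = production.mean()              # deterministic strategy: commit the mean
det_profit = expected_profit(point_forecast)

critical_ratio = price / (price + penalty)      # newsvendor quantile
prob_commitment = np.quantile(production, critical_ratio)   # probabilistic strategy
prob_profit = expected_profit(prob_commitment)

print(f"deterministic commitment : {point_forecast:6.2f} MWh -> profit {det_profit:8.1f}")
print(f"probabilistic commitment : {prob_commitment:6.2f} MWh -> profit {prob_profit:8.1f}")
```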

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 49
282 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, enabling well-behaved signal components to be obtained without making any prior assumptions on the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
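
As a pointer to how one of the reviewed techniques operates, the following is a minimal singular spectrum analysis (SSA) sketch in Python: embed the series in a trajectory matrix, take its SVD, and reconstruct components by diagonal averaging. The window length, number of components and synthetic test series are illustrative choices, not those used on the ozone or rainfall data.

```python
# Minimal SSA decomposition of a noisy series (trend + oscillation + noise).
import numpy as np

def ssa(series, window, n_components):
    """Return `n_components` reconstructed components of `series`."""
    n = len(series)
    k = n - window + 1
    # 1) Embedding: build the trajectory (Hankel) matrix, one lagged copy per column
    X = np.column_stack([series[i:i + window] for i in range(k)])
    # 2) SVD of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # 3) Diagonal averaging (Hankelisation) back to a 1-D series
        comp = np.array([np.mean(Xi[::-1, :].diagonal(j - window + 1))
                         for j in range(n)])
        components.append(comp)
    return np.array(components)

t = np.linspace(0, 10, 500)
signal = 0.5 * t + np.sin(2 * np.pi * t) \
         + 0.3 * np.random.default_rng(1).normal(size=t.size)
rc = ssa(signal, window=60, n_components=3)
trend_estimate = rc[0]   # the leading component typically captures the trend
```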

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 82
281 Kantian Epistemology in Examination of the Axiomatic Principles of Economics: The Synthetic a Priori in the Economic Structure of Society

Authors: Mirza Adil Ahmad Mughal

Abstract:

Transcendental analytics, in the Critique of Pure Reason, combines space and time as conditions of the possibility of the phenomenon from the transcendental aesthetic with the pure magnitude-intuition notion. The property of continuity, as a qualitative result of additive magnitude, brings the possibility of connecting with experience, even though only as a potential because of the a priori necessity from assumption, as the syntheticity of the a priori task of a scientific method of philosophy given by Kant, which precludes the application of categories to something not empirically reducible to the content of such a category's corresponding and possible object. This continuity, as the qualitative result of the a priori constructed notion of magnitude, lies as a fundamental assumption and property of what in microeconomic theory is called 'choice rules', which combine the potentially empirical and practical budget-price pairs with preference relations. This latter result is the purely qualitative side of the choice rules' otherwise autonomous, quantitative nature. The theoretical, as distinct from empirical, nature of this qualitative result is a synthetic a priori truth, which it should be, if at all, if the axiomatic structure of economic theory is held to be correct. It has a potentially verifiable content as its possible object in the form of quantitative price-budget pairs. Yet, the object that serves the respective Kantian category is qualitative itself, which is utility. This article explores the validity of Kantian qualifications for this application of 'categories' to the economic structure of society.

Keywords: categories of understanding, continuity, convexity, psyche, revealed preferences, synthetic a priori

Procedia PDF Downloads 70
280 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier regarded as a support function within an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that were unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has been a leverage for the Oil & Gas industry to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and the evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve the uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 323
279 Development of a Consult Liaison Psychology Service: A Systematic Review

Authors: Ben J. Lippe

Abstract:

Consult Liaison Psychology services are growing rapidly, given the robust empirical support for the utility of this service in hospital settings. These psychological services, including clinical assessment, applied psychotherapy, and consultation with other healthcare providers, have been shown to improve health outcomes for patients and bolster important areas of administrative interest such as decreased length of patient admission. However, there is little descriptive literature outlining the process and mechanisms of building or developing a Consult Liaison Psychology service. The main findings of this conceptual work are intended to be clear in nature, elucidating the essential methods involved in developing consult liaison psychology programs, including thorough reviews of relevant behavioral health literature and the inclusion of experiential outcomes. The diverse range of hospital settings and healthcare systems makes a “blueprint” method of program development challenging to define, yet the important structural frameworks presented here, based on the relevant literature and applied practice, can help lay critical groundwork for program development in this growing area of psychological service. This conceptual approach addresses the prominent processes, as well as common programmatic and clinical pitfalls, involved in the development of a Consult Liaison Psychology service. This paper, including a systematic review of relevant literature, is intended to serve as a key reference for the development of Consult Liaison Psychology services and other related behavioral health programs, and to help inform further research efforts.

Keywords: behavioral health, consult liaison, health psychology, psychology program development

Procedia PDF Downloads 125
278 The Superiority of 18F-Sodium Fluoride PET/CT for Detecting Bone Metastases in Comparison with Other Bone Diagnostic Imaging Modalities

Authors: Mojtaba Mirmontazemi, Habibollah Dadgar

Abstract:

Bone is the most common metastasis site in some advanced malignancies, such as prostate and breast cancer. Bone metastasis generally indicates a poorer prognosis in these patients. Different radiological and molecular imaging modalities are used for detecting bone lesions. Molecular imaging, including computed tomography, magnetic resonance imaging, planar bone scintigraphy, single-photon emission tomography, and positron emission tomography, as noninvasive visualization of biological occurrences, has the potential for exact examination, characterization, risk stratification and comprehension of human diseases. It can also directly visualize targets, clearly specify cellular pathways and provide precision medicine for molecular targeted therapies. These advantages contribute to implementing personalized treatment for each patient. Currently, NaF PET/CT has largely replaced standard bone scintigraphy for the detection of bone metastases. On one hand, 68Ga-PSMA PET/CT has gained high attention for accurate staging of primary prostate cancer and restaging after biochemical recurrence. On the other hand, FDG PET/CT is not commonly used in osseous metastases of prostate and breast cancer, and its usage is limited to staging patients with aggressive primary tumors or localizing the site of disease. In this article, we examine current studies on FDG, NaF, and PSMA PET/CT imaging regarding their diagnostic utility in bone metastases and in assessing response to treatment in patients with breast and prostate cancer.

Keywords: skeletal metastases, fluorodeoxyglucose, sodium fluoride, molecular imaging, precision medicine, prostate cancer (68Ga-PSMA-11)

Procedia PDF Downloads 87
277 Implications of Dehusking and Aqueous Soaking on Anti-nutrients, Phytochemical Screening and Antioxidants Properties of Jack Beans (Canavalia Ensiformis L. DC)

Authors: Oseni Margaret Oladunni, Ogundele Joan Olayinka, Olusanya Olalekan Samuel, Akinniyi Modupe Olakintan

Abstract:

The world's growing population is pushing humans to look for alternative food sources among underutilised or wild plants. One of these food sources has been identified as Canavalia ensiformis, or jack bean. The only issue with using jack beans is that they contain anti-nutrient compounds, which must be removed or reduced in order for the beans to be fit for human consumption. The objective of this study is to determine the nutritional and industrial utility of Canavalia ensiformis by analysing the anti-nutrient, phytochemical, and antioxidant composition of raw whole seeds and dehusked, soaked seeds using established procedures. Phytate (23.48±0.24, 15.24±0.41 and 14.83±0.00), oxalate (4.32±0.09, 3.96±0.09 and 2.88±0.09), tannins (22.77±0.73, 18.68±0.03 and 17.50±0.46), and lectins (6.67±0.04, 6.20±0.01 and 6.42±0.07) exhibited the highest anti-nutrient values in the raw whole seed and the lowest in the dehusked, soaked seeds. The samples were subjected to phytochemical screening, which detected the presence of cardiac glycosides as well as anthraquinones, alkaloids, tannins, saponins, steroids, flavonoids, terpenoids and phlobatannins. Due to the reduction in the phytochemical contents quantified as a result of dehusking and soaking, phlobatannins and anthraquinones were not found in the treated samples. The research findings also demonstrated elevated concentrations of several plausible phytochemical components with potential medicinal value, with the raw whole seed exhibiting the greatest capacity to scavenge free radicals. Accordingly, the study's findings validate the seed's therapeutic applications and imply that it might be an inexpensive source of antioxidants for humans and animals alike.

Keywords: dehusking, soaking, anti-nutrients, antioxidants, jack bean

Procedia PDF Downloads 20
276 The Scope and Effectiveness of Interactive Voice Response Technologies in Post-Operative Care

Authors: Zanib Nafees, Amir Razaghizad, Ibtisam Mahmoud, Abhinav Sharma, Renzo Cecere

Abstract:

More than one million surgeries are performed each year in Canada, resulting in more than 100,000 associated serious adverse events (SAEs) per year. These are defined as unintended injuries or complications that adversely affect the well-being of patients. In recent years, there has been a proliferation of digital health interventions that have the potential to assist, monitor, and educate patients, facilitating self-care following post-operative discharge. Among digital health interventions are interactive voice response technologies (IVRs), which have been shown to be highly effective in certain medical settings. Although numerous IVR-based interventions have been developed, their effectiveness and utility remain unclear, notably in post-operative settings. To the best of our knowledge, no systematic or scoping reviews have evaluated this topic to date. Thus, the objective of this scoping review protocol is to systematically map and explore the literature and evidence describing and examining IVR tools, implementation, evaluation, outcomes, and experience for post-operative patients. The focus will be primarily on the evaluation of baseline performance status, clinical assessment, treatment outcomes, and patient management, including self-management and self-monitoring. The objective of this scoping review is to assess the extent of the literature in order to direct future research efforts by identifying gaps and limitations in the literature and to highlight relevant determinants of positive outcomes in the emerging field of IVR monitoring for health outcomes in post-operative patients.

Keywords: digital healthcare technologies, post-surgery, interactive voice technology, interactive voice response

Procedia PDF Downloads 241
275 Architecture for QoS Based Service Selection Using Local Approach

Authors: Gopinath Ganapathy, Chellammal Surianarayanan

Abstract:

Services are growing rapidly, and generally they are aggregated into a composite service to accomplish complex business processes. There may be several services that offer the same required function for a particular task in a composite service. Hence a choice has to be made to select suitable services from alternative, functionally similar services. Quality of Service (QoS) acts as a discriminating factor in deciding which component services should be selected to satisfy the quality requirements of a user during service composition. There are two categories of approaches for QoS-based service selection, namely global and local approaches. Global approaches are known to be NP-hard and to offer poor scalability in large scale composition. As an alternative to global methods, local selection methods, which reduce the search space by breaking the large, complex problem of selecting services for the workflow into independent sub-problems of selecting services for individual tasks, are emerging. In this paper, a distributed architecture for selecting services based on QoS using local selection is presented, with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global or workflow-level constraints into local or task-level constraints, and a service selector, which selects an appropriate service for each task with maximum utility, satisfying the corresponding local constraints. The QoS manager manages the QoS information at two levels, namely the service class level and the individual service level. The architecture serves as an implementation model for local selection.
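
A minimal sketch of the local selection step described above, assuming a toy candidate set: the constraint decomposer shares a workflow-level response-time bound evenly among tasks, and the service selector then picks, per task, the highest-utility candidate that meets its local constraint. The service names, utilities and QoS values are hypothetical.

```python
# Toy local QoS-based selection: decompose a global constraint, then select per task.
candidates = {
    "task1": [{"name": "s11", "utility": 0.9, "response_ms": 120},
              {"name": "s12", "utility": 0.7, "response_ms": 60}],
    "task2": [{"name": "s21", "utility": 0.8, "response_ms": 200},
              {"name": "s22", "utility": 0.6, "response_ms": 90}],
}

def decompose(global_limit_ms, tasks):
    """Constraint decomposer: share the end-to-end bound equally among tasks."""
    return {t: global_limit_ms / len(tasks) for t in tasks}

def select(candidates, global_limit_ms=250):
    local_limits = decompose(global_limit_ms, candidates)
    plan = {}
    for task, services in candidates.items():
        feasible = [s for s in services if s["response_ms"] <= local_limits[task]]
        if not feasible:
            raise ValueError(f"no service satisfies the local constraint of {task}")
        plan[task] = max(feasible, key=lambda s: s["utility"])   # service selector
    return plan

print(select(candidates))   # {'task1': s11, 'task2': s22} for the toy data above
```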

Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection

Procedia PDF Downloads 402
274 LIZTOXD: Inclusive Lizard Toxin Database by Using MySQL Protocol

Authors: Iftikhar A. Tayubi, Tabrej Khan, Mansoor M. Alsubei, Fahad A. Alsaferi

Abstract:

LIZTOXD provides a single source of high-quality information about proteinaceous lizard toxins that will be an invaluable resource for pharmacologists, neuroscientists, toxicologists, medicinal chemists, ion channel scientists, clinicians, and structural biologists. We provide an intuitive, well-organized and user-friendly web interface that allows users to explore detailed information on lizards and toxin proteins. It includes the common name, scientific name, entry id, entry name, protein name and length of the protein sequence. The utility of this database is that it provides a user-friendly interface for users to retrieve information about the lizard, the toxin and the toxin proteins of different lizard species. The interfaces created in this database will satisfy the demands of the scientific community by providing in-depth knowledge about lizards and their toxins. In the next phase of our project, the database will be implemented using MySQL and Hypertext Preprocessor (PHP), with SmartDraw used for schema design. A database is an effective means of storing large quantities of data efficiently. Users can thus navigate from one section to another, depending on their field of interest. The database contains a wealth of information on species, toxins, clinical data, etc. LIZTOXD is a resource that provides comprehensive information about protein toxins from lizards. The combination of specific classification schemes and a rich user interface allows researchers to easily locate and view information on the sequence, structure, and biological activity of these toxins. This manually curated database will be a valuable resource both for basic researchers and for those interested in potential pharmaceutical and agricultural applications of lizard toxins.
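
A hypothetical minimal relational layout for the record fields listed above (common name, scientific name, entry id, entry name, protein name, sequence length), sketched here with SQLite via Python for portability; the production system is described as MySQL/PHP, and the actual schema may differ.

```python
# Illustrative two-table schema and a typical species lookup query.
import sqlite3

conn = sqlite3.connect("liztoxd_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS lizard (
    lizard_id        INTEGER PRIMARY KEY,
    common_name      TEXT NOT NULL,
    scientific_name  TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS toxin_protein (
    entry_id         TEXT PRIMARY KEY,   -- e.g. a UniProt-style accession
    entry_name       TEXT,
    protein_name     TEXT,
    sequence_length  INTEGER,
    lizard_id        INTEGER REFERENCES lizard(lizard_id)
);
""")

# Example query a web interface might issue: all toxin proteins for one species
# ("Heloderma suspectum" is only an illustrative species name here).
rows = conn.execute("""
    SELECT t.entry_id, t.protein_name, t.sequence_length
    FROM toxin_protein AS t JOIN lizard AS l USING (lizard_id)
    WHERE l.scientific_name = ?
""", ("Heloderma suspectum",)).fetchall()
conn.close()
```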

Keywords: LIZTOXD, MySQL, PHP, smart draw

Procedia PDF Downloads 136
273 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland by using online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of said data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
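
A small sketch of the bi-gram extraction and topic-modelling steps described above, using scikit-learn's CountVectorizer and LatentDirichletAllocation; the example posts are placeholders, not data scraped for the study.

```python
# Bi-gram counting plus LDA topic modelling on a handful of placeholder posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "advice on feeding layers through winter",
    "best age to send pigs to slaughter",
    "safe disposal of fallen stock on a smallholding",
    "breeding rare breed poultry for the first time",
]

# Uni- and bi-grams, as in the exploratory term analysis
vectoriser = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectoriser.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(X)

terms = vectoriser.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]   # top terms per topic
    print(f"topic {k}: {', '.join(top)}")
```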

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 69
272 Research on Container Housing: A New Form of Informal Housing on Urban Temporary Land

Authors: Lufei Mao, Hongwei Chen, Zijiao Chai

Abstract:

Informal housing is a widespread phenomenon in developing countries. In many newly emerging cities in China, rapid urbanization leads to an influx of population as well as a shortage of housing. Against this background, container housing, a new form of informal housing, has gradually appeared on a small scale on urban temporary land in recent years. Container housing, just as its name implies, transforms containers into small houses in which groups of migrant workers can live. Scholars in other countries have established sound theoretical frameworks for the study of informal housing, but the research on this small-scale housing form seems rather limited. Unlike the cases in developed countries, these houses, which fall outside urban planning, bring about various environmental, economic, social and governance issues. Aiming to understand this new-born housing form, a survey focusing on two container housing settlements in Hangzhou, China was carried out to gather information about them. Based on this thorough survey, the paper summarises the features and problems of the infrastructure, environment and social communication of container housing settlements. The results show that these containers lacked basic facilities and were confined to small, disorderly plots of temporary land. Moreover, because of deficiencies in management, the rental rights of these containers might not be guaranteed. The paper then analyses the factors affecting the formation and evolution of container housing settlements. It turns out that institutional and policy factors, market factors and social factors are the three main factors affecting their formation. Finally, the paper proposes some suggestions for the governance of container housing and the utilization pattern of urban temporary land.

Keywords: container housing, informal housing, urban temporary land, urban governance

Procedia PDF Downloads 232
271 Utility of CT Perfusion Imaging for Diagnosis and Management of Delayed Cerebral Ischaemia Following Subarachnoid Haemorrhage

Authors: Abdalla Mansour, Dan Brown, Adel Helmy, Rikin Trivedi, Mathew Guilfoyle

Abstract:

Introduction: Diagnosing delayed cerebral ischaemia (DCI) following aneurysmal subarachnoid haemorrhage (SAH) can be challenging, particularly in poor-grade patients. Objectives: This study sought to assess the value of routine CTP in identifying (or excluding) DCI and in guiding management. Methods: An eight-year retrospective neuroimaging study at a large UK neurosurgical centre. Subjects included a random sample of adult patients with confirmed aneurysmal SAH who had a CTP scan during their inpatient stay over an 8-year period (May 2014 - May 2022). Data were collected through electronic patient records and PACS. Variables included age, WFNS scale, aneurysm site, treatment, the timing of CTP, radiologist report, and DCI management. Results: Over eight years, 916 patients were treated for aneurysmal SAH; this study focused on 466 patients who were randomly selected. Of this sample, 181 (38.84%) had one or more CTP scans following brain aneurysm treatment (318 scans in total). The first CTP scan in each patient was performed at 1-20 days following ictus (median 4 days). There was radiological evidence of DCI in 83 patients, and no reversible ischaemia was found in 80. Findings were equivocal in the remaining 18. Of the 103 patients treated with clipping, 49 had radiological evidence of DCI, compared to 31 of the 69 patients treated with endovascular embolization. The remaining 9 patients had either unsecured aneurysms or non-aneurysmal SAH. Of the patients with radiological evidence of DCI, 65 had a treatment change following the CTP directed at improving cerebral perfusion. In contrast, treatment was not changed for 61 patients without radiological evidence of DCI. Conclusion: CTP is a useful adjunct to clinical assessment in the diagnosis of DCI and is helpful in identifying patients who may benefit from intensive therapy and those in whom it is unlikely to be effective.

Keywords: SAH, vasospasm, aneurysm, delayed cerebral ischemia

Procedia PDF Downloads 43
270 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants employing several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principles of fossil-fuel-fired thermal power plants are the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides that, there are many other technology evaluation and selection criteria for designing optimal water systems that meet the requirements, such as local conditions, environmental restrictions, electricity and other consumables availability and transport, process water sources and scarcity, land use constraints, etc. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial waste water treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + waste water treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.

Keywords: thermal power plant, lignite coal, pretreatment, demineralization, electrodialysis, recycling, ash dampening

Procedia PDF Downloads 457
269 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community

Authors: Mohamed Ghorab

Abstract:

Locally generated and distributed systems for thermal and electrical energy are envisaged in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between the hubs. The energy generated from each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization between six different hubs representing a Canadian community are investigated in this study. Three different scenarios of technology systems are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first technology system and as a reference case study. The conventional system includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second technology system includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and Organic Rankine Cycle (ORC) systems, where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP & ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, versus 9.3% for scenario 2 (CHP system).
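
The hub dispatch problem solved in GAMS can be pictured with a much smaller linear programme. The sketch below dispatches grid imports, CHP fuel and boiler fuel for a single hub over three hours at minimum cost; the efficiencies, prices and demands are illustrative assumptions, not the study's Canadian community data, and CO2 accounting is omitted.

```python
# Minimal single-hub dispatch LP: meet hourly electrical and thermal demand
# from grid imports, a CHP unit and a boiler at minimum cost.
import numpy as np
from scipy.optimize import linprog

hours = 3
elec_demand = np.array([3.0, 5.0, 4.0])   # MWh_e per hour
heat_demand = np.array([6.0, 4.0, 5.0])   # MWh_th per hour

c_grid, c_gas = 120.0, 40.0               # $/MWh of electricity / fuel
eta_e, eta_th, eta_boiler = 0.35, 0.45, 0.90

# Decision vector per hour: [grid_import, chp_fuel, boiler_fuel]
n = 3 * hours
cost = np.tile([c_grid, c_gas, c_gas], hours)

A_eq, b_eq = [], []
for t in range(hours):
    elec_row, heat_row = np.zeros(n), np.zeros(n)
    elec_row[3*t:3*t+3] = [1.0, eta_e, 0.0]          # grid + CHP electricity
    heat_row[3*t:3*t+3] = [0.0, eta_th, eta_boiler]  # CHP heat + boiler heat
    A_eq += [elec_row, heat_row]
    b_eq += [elec_demand[t], heat_demand[t]]

res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=[(0, None)] * n)
print("feasible:", res.success, " total cost: $%.0f" % res.fun)
```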

Keywords: distributed energy resources, network energy system, optimization, microgeneration system

Procedia PDF Downloads 170
268 The Utilization of Tea Extract within the Realm of the Food Industry

Authors: Raana Babadi Fathipour

Abstract:

Tea, a beverage widely cherished across the globe, has captured the interest of scholars with the recent acknowledgement of its noteworthy health advantages. Of particular significance is its proven ability to ward off ailments such as cancer and cardiovascular afflictions. Moreover, within the realm of culinary creations, lipid oxidation poses a significant challenge for food product development. In light of these concerns, this discourse turns its attention towards exploring diverse methodologies employed in extracting polyphenols from various types of tea leaves and examining their utility within the vast landscape of the ever-evolving food industry. Based on the discoveries unearthed in this comprehensive investigation, it has been determined that the fundamental constituents of tea are polyphenols possessed of intrinsic health-enhancing properties. These include an assortment of catechins, namely epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate. Moreover, gallic acid, flavonoids, flavonols and theaflavins have also been detected within this aromatic beverage. Of the myriad components examined rigorously in this study's analysis, catechin emerges as particularly beneficial. Multiple techniques have emerged over time to successfully extract key compounds from tea plants, including solvent-based extraction methodologies, microwave-assisted water extraction approaches and ultrasound-assisted extraction techniques. In particular, consideration is given to the microwave-assisted water extraction method as a viable scheme which effectively procures valuable polyphenols from tea extracts. This methodology appears adaptable for implementation within sectors such as dairy production along with the meat and oil industries alike.

Keywords: camellia sinensis, extraction, food application, shelf life, tea

Procedia PDF Downloads 45
267 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes lead to adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil with different methods reduces settlement and provides foundation stability. A few methods, such as the rocking foundation (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control the pore pressure and enhance the strength of loose soils. One of the problems with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, in turn, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. The soil is modeled with the PressureDependentMultiYield (PDMY) material model using quad elements that provide stress-strain at Gauss points, calibrated to predict the behavior of Ganga sand. The model, analyzed with tied degree-of-freedom contact, reveals that the system responses align with the shake table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases with varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 166
266 Mapping a Data Governance Framework to the Continuum of Care in the Active Assisted Living Context

Authors: Gaya Bin Noon, Thoko Hanjahanja-Phiri, Laura Xavier Fadrique, Plinio Pelegrini Morita, Hélène Vaillancourt, Jennifer Teague, Tania Donovska

Abstract:

Active Assisted Living (AAL) refers to systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for care recipients. As the population ages, there is a pressing need for non-intrusive, continuous, adaptable, and reliable health monitoring tools to support aging in place. AAL has great potential to support these efforts with the wide variety of solutions currently available, but insufficient efforts have been made to address concerns arising from the integration of AAL into care. The purpose of this research was to (1) explore the integration of AAL technologies and data into the clinical pathway, and (2) map data access and governance for AAL technology in order to develop standards for use by policy-makers, technology manufacturers, and developers of smart communities for seniors. This was done through four successive research phases: (1) literature search to explore existing work in this area and identify lessons learned; (2) modeling of the continuum of care; (3) adapting a framework for data governance into the AAL context; and (4) interviews with stakeholders to explore the applicability of previous work. Opportunities for standards found in these research phases included a need for greater consistency in language and technology requirements, better role definition regarding who can access and who is responsible for taking action based on the gathered data, and understanding of the privacy-utility tradeoff inherent in using AAL technologies in care settings.

Keywords: active assisted living, aging in place, internet of things, standards

Procedia PDF Downloads 112
265 A Cost-Benefit Analysis of Routinely Performed Transthoracic Echocardiography in the Setting of Acute Ischemic Stroke

Authors: John Rothrock

Abstract:

Background: The role of transthoracic echocardiography (TTE) in the diagnosis and management of patients with acute ischemic stroke remains controversial. While many stroke subspecialists reserve TTE for selected patients, others consider the procedure obligatory for most or all acute stroke patients. This study was undertaken to assess the cost vs. benefit of 'routine' TTE. Methods: We examined a consecutive series of patients who were admitted to a single institution in 2019 for acute ischemic stroke and underwent TTE. We sought to determine the frequency with which the results of TTE led to a new diagnosis of cardioembolism, redirected therapeutic cerebrovascular management, and at least potentially influenced the short- or long-term clinical outcome. We recorded the direct cost associated with TTE. Results: There were 1076 patients in the study group, all of whom underwent TTE. TTE identified an unsuspected source of possible/probable cardioembolism in 62 patients (6%), confirmed an initially suspected source (primarily endocarditis) in an additional 13 (1%), and produced findings that stimulated subsequent testing diagnostic of possible/probable cardioembolism in 7 patients (<1%). TTE results potentially influenced the clinical outcome in a total of 48 patients (4%). With a total direct cost of $1.51 million, the mean cost per case wherein TTE results potentially influenced the clinical outcome in a positive manner was $31,375. Diagnostically and therapeutically, TTE was most beneficial in the 67 patients under the age of 55 who presented with 'cryptogenic' stroke, identifying patent foramen ovale in 21 (31%); closure was performed in 19. Conclusions: The utility of TTE in the setting of acute ischemic stroke is modest, with its yield greatest in younger patients with cryptogenic stroke. Given the greater sensitivity of transesophageal echocardiography in detecting PFO and evaluating the aortic arch, TTE’s role in stroke diagnosis would appear to be limited.

Keywords: cardioembolic, cost-benefit, stroke, TTE

Procedia PDF Downloads 99
264 Assessing the Legacy Effects of Wildfire on Eucalypt Canopy Structure of South Eastern Australia

Authors: Yogendra K. Karna, Lauren T. Bennett

Abstract:

Fire-tolerant eucalypt forests are one of the major forest ecosystems of south-eastern Australia and are thought to be highly resistant to frequent high severity wildfires. However, the impact of wildfires of different severities on the canopy structure of this fire-tolerant forest type is under-studied, and there are significant knowledge gaps in relation to the assessment of tree- and stand-level canopy structural dynamics and recovery after fire. Assessment of canopy structure is a complex task involving accurate measurements of the horizontal and vertical arrangement of the canopy in space and time. This study examined the utility of multi-temporal, small-footprint lidar data to describe the changes in the horizontal and vertical canopy structure of fire-tolerant eucalypt forests seven years after wildfires of different severities, from the tree to the stand level. Extensive ground measurements were carried out in four severity classes to describe and validate canopy cover and height metrics as they change after wildfire. Several metrics, such as crown height and width, crown base height and crown clumpiness, were assessed at the tree and stand level using several individual tree top detection and measurement algorithms. Persistent effects of high severity fire on both tree crowns and the stand canopy were observed eight years after the fire. High severity fire increased the crown depth but decreased the crown projective cover, leading to a more open canopy.
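
For context, two of the stand-level metrics mentioned above (canopy projective cover and canopy height statistics) can be derived directly from a lidar canopy height model, as in the minimal sketch below; the 2 m canopy threshold and the synthetic raster are assumptions for illustration only.

```python
# Stand-level canopy metrics from a canopy height model (CHM) raster.
import numpy as np

rng = np.random.default_rng(42)
chm = rng.gamma(shape=2.0, scale=6.0, size=(200, 200))   # stand-in CHM, metres
chm[rng.random(chm.shape) < 0.25] = 0.0                  # simulate canopy gaps

cover_threshold = 2.0                 # height above which a cell counts as canopy
canopy_mask = chm >= cover_threshold

canopy_cover = canopy_mask.mean()                      # fraction of area covered
mean_canopy_height = chm[canopy_mask].mean()
p95_height = np.percentile(chm[canopy_mask], 95)       # upper-canopy height

print(f"canopy cover {canopy_cover:.2f}, mean height {mean_canopy_height:.1f} m, "
      f"95th percentile {p95_height:.1f} m")
```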

Keywords: canopy gaps, canopy structure, crown architecture, crown projective cover, multi-temporal lidar, wildfire severity

Procedia PDF Downloads 141
263 Enhanced Production of Endo-β-1,4-Xylanase from a Newly Isolated Thermophile Geobacillus stearothermophilus KIBGE-IB29 for Prospective Industrial Applications

Authors: Zainab Bibi, Afsheen Aman, Shah Ali Ul Qader

Abstract:

Endo-β-1,4-xylanases [EC 3.2.1.8] are one of the major groups of enzymes involved in the degradation of xylan and have several applications in the food, textile and paper processing industries. Due to the broad utility of endo-β-1,4-xylanase, researchers are focusing on increasing the productivity of this hydrolase from various microbial species. Harsh industrial conditions, faster reaction rates and efficient hydrolysis of xylan with a low risk of contamination are critical requirements of industry that can be fulfilled by synthesizing the enzyme with suitable properties. In the current study, a newly isolated thermophile, Geobacillus stearothermophilus KIBGE-IB29, was used in order to attain the maximum production of endo-1,4-β-xylanase. The bacterial culture was isolated from soil collected around the blast furnace site of a steel processing mill in Karachi. Optimization of various nutritional and physical factors resulted in the maximum synthesis of endo-1,4-β-xylanase from the thermophile. A high production yield was achieved at 60°C and pH 6.0 after 24 hours of incubation. Various nitrogen sources, viz. peptone, yeast extract and meat extract, improved the enzyme synthesis at optimum concentrations of 0.5%, 0.2% and 0.1%, respectively. Dipotassium hydrogen phosphate (0.25%), potassium dihydrogen phosphate (0.05%), ammonium sulfate (0.05%) and calcium chloride (0.01%) were found to be valuable salts for improving the production of the enzyme. The thermophilic nature of the isolate, together with its broad pH stability profile and reduced fermentation time, indicates its importance for effective xylan saccharification and for the large-scale production of endo-1,4-β-xylanase.

Keywords: geobacillus, optimization, production, xylanase

Procedia PDF Downloads 293
262 Diagnostic and Prognostic Use of Kinetics of Microrna and Cardiac Biomarker in Acute Myocardial Infarction

Authors: V. Kuzhandai Velu, R. Ramesh

Abstract:

Background and objectives: Acute myocardial infarction (AMI) is the most common cause of mortality and morbidity. Over the last decade, microRNAs (miRs) have emerged as potential markers for detecting AMI. The current study evaluates the kinetics and importance of miRs in the differential diagnosis of ST-segment elevation MI (STEMI) and non-STEMI (NSTEMI), their correlation with conventional biomarkers, and their ability to predict the immediate outcome of AMI in terms of arrhythmias and left ventricular (LV) dysfunction. Materials and methods: A total of 100 AMI patients were recruited for the study. Routine cardiac biomarker and miRNA levels were measured at diagnosis and serially at admission, 6, 12, 24, and 72 hrs. The baseline biochemical parameters were analyzed. The expression of miRs was compared between STEMI and NSTEMI at different time intervals. The diagnostic utility of miR-1, miR-133, miR-208, and miR-499 levels was analyzed using RT-PCR and various diagnostic statistical tools such as ROC curves, odds ratios, and likelihood ratios. Results: miR-1, miR-133, and miR-499 showed peak concentrations at 6 hours, whereas miR-208 showed highly significant differences at all time intervals. miR-133 demonstrated the maximum area under the curve at different time intervals in the differential diagnosis of STEMI and NSTEMI, followed by miR-499 and miR-208. Evaluation of miRs for predicting arrhythmia and LV dysfunction using the admission sample demonstrated that miR-1 (OR = 8.64; LR = 1.76) and miR-208 (OR = 26.25; LR = 5.96) showed the maximum odds ratio and likelihood ratio, respectively. Conclusion: Circulating miRNAs showed highly significant differences between STEMI and NSTEMI in AMI patients, with peaks much earlier than the conventional biomarkers. miR-133, miR-208, and miR-499 can be used in the differential diagnosis of STEMI and NSTEMI, whereas miR-1 and miR-208 could be used in the prediction of arrhythmia and LV dysfunction, respectively.
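
For reference, the diagnostic statistics cited above can be computed as in the short sketch below, which works through a hypothetical 2x2 table (odds ratio, positive likelihood ratio) and a synthetic score set (ROC AUC); none of the numbers are the study's miRNA data.

```python
# Diagnostic statistics from a hypothetical 2x2 table and synthetic scores.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical 2x2 table for "elevated miR" vs outcome (e.g. arrhythmia)
tp, fp, fn, tn = 18, 12, 6, 64

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
odds_ratio = (tp * tn) / (fp * fn)            # cross-product odds ratio
lr_positive = sensitivity / (1 - specificity) # positive likelihood ratio

# ROC AUC from continuous expression scores (synthetic example)
rng = np.random.default_rng(0)
labels = np.r_[np.ones(50), np.zeros(50)]
scores = np.r_[rng.normal(2.0, 1.0, 50), rng.normal(1.0, 1.0, 50)]
auc = roc_auc_score(labels, scores)

print(f"OR {odds_ratio:.1f}, LR+ {lr_positive:.2f}, AUC {auc:.2f}")
```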

Keywords: myocardial infarction, cardiac biomarkers, microRNA, arrhythmia, left ventricular dysfunction

Procedia PDF Downloads 106
261 Portfolio Assessment and English as a Foreign Language Aboriginal Students’ English Learning Outcome in Taiwan

Authors: Li-Ching Hung

Abstract:

The lack of empirical research on portfolio assessment in aboriginal EFL English classes of junior high schools in Taiwan may inhibit EFL teachers from appreciating the utility of this alternative assessment approach. This study addressed the following research questions: 1) how do aboriginal EFL students and instructors of junior high schools in Taiwan perceive portfolio assessment, and 2) how does portfolio assessment affect Taiwanese aboriginal EFL students’ learning outcomes? Ten classes from five junior high schools in different regions of Taiwan participated in this study. Two classes from each school joined the study; one class was randomly assigned as the control group and the other as the experimental group. Each of these five junior high schools consisted of at least 50% aboriginal students. A mixed research design was utilized. The instructor of each class implemented portfolio assessment for 15 weeks of the 2015 Fall Semester. At the beginning of the semester, all participants took a GEPT test (pretest), and in the 15th week, all participants took a GEPT test of the same level (post-test). The scores of the students’ GEPT tests were checked by the researcher as supplemental data in order to understand each student’s performance. In addition, each instructor was interviewed to provide qualitative data concerning the students’ general learning performance and their perception of implementing portfolio assessment in their English classes. The results of this study were used to provide suggestions for EFL instructors when modifying their lesson plans regarding assessment. In addition, the empirical data serve as references for EFL instructors implementing portfolio assessment in their classes effectively.

Keywords: assessment, portfolio assessment, qualitative design, aboriginal ESL students

Procedia PDF Downloads 112
260 Spectroscopic Study of Tb³⁺ Doped Calcium Aluminozincate Phosphor for Display and Solid-State Lighting Applications

Authors: Sumandeep Kaur, Allam Srinivasa Rao, Mula Jayasimhadri

Abstract:

In recent years, rare earth (RE) ion doped inorganic luminescent materials have been attracting great attention due to their excellent physical and chemical properties. These materials offer high thermal and chemical stability and exhibit good luminescence properties due to the presence of RE ions. The luminescent properties of these materials are attributed to the intra-configurational f-f transitions of the RE ions. A series of Tb³⁺ doped calcium aluminozincate phosphors has been synthesized via the sol-gel method. The structural and morphological studies have been carried out by recording X-ray diffraction patterns and SEM images. The luminescence spectra have been recorded for a comprehensive study of the luminescence properties. The XRD profile reveals a single-phase orthorhombic crystal structure with an average crystallite size of 65 nm, as calculated using the Debye-Scherrer equation. The SEM image exhibits a completely random, irregular morphology of micron-sized particles of the prepared samples. The optimization of luminescence has been carried out by varying the dopant Tb³⁺ concentration within the range from 0.5 to 2.0 mol%. The as-synthesized phosphors exhibit intense emission at 544 nm when pumped at a 478 nm excitation wavelength. The optimum Tb³⁺ concentration has been found to be 1.0 mol% in the present host lattice. The decay curves show bi-exponential fitting for the as-synthesized phosphor. The colorimetric studies show green emission with CIE coordinates (0.334, 0.647) lying in the green region for the optimized Tb³⁺ concentration. This report reveals the potential utility of Tb³⁺ doped calcium aluminozincate phosphors for display and solid-state lighting devices.
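
The crystallite-size estimate quoted above follows from the XRD line broadening via the Scherrer relation, reproduced here for reference (the shape factor K ≈ 0.9 is the usual assumption):

```latex
% Scherrer broadening relation behind the 65 nm crystallite-size estimate.
% D: crystallite size, K: shape factor (~0.9), \lambda: X-ray wavelength,
% \beta: peak FWHM in radians, \theta: Bragg angle
D = \frac{K\,\lambda}{\beta\,\cos\theta}
```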

Keywords: concentration quenching, phosphor, photoluminescence, XRD

Procedia PDF Downloads 122
259 Design of a Hand-Held, Clamp-on, Leakage Current Sensor for High Voltage Direct Current Insulators

Authors: Morné Roman, Robert van Zyl, Nishanth Parus, Nishal Mahatho

Abstract:

Leakage current monitoring for high voltage transmission line insulators is of interest as a performance indicator. Presently, to the best of our knowledge, there is no commercially available, clamp-on type, non-intrusive device for measuring leakage current on energised high voltage direct current (HVDC) transmission line insulators. The South African power utility, Eskom, is investigating the development of such a hand-held sensor for two important applications: first, for continuous real-time condition monitoring of HVDC line insulators and, second, for use by live line workers to determine if it is safe to work on energised insulators. In this paper, a DC leakage current sensor based on magnetic field sensing techniques is developed. The magnetic field sensor used in the prototype can also detect alternating current up to 5 MHz. The DC leakage current prototype detects the magnetic field associated with the current flowing on the surface of the insulator. Preliminary HVDC leakage current measurements are performed on glass insulators. The results show that the prototype can accurately measure leakage current in the specified range of 1-200 mA. The influence of external fields from the HVDC line itself on the leakage current measurements is mitigated through a differential magnetometer sensing technique. Thus, the developed sensor can perform measurements on in-service HVDC insulators. The research contributes to the body of knowledge by providing a sensor to measure leakage current on energised HVDC insulators non-intrusively. This sensor can also be used by live line workers to inform them whether or not it is safe to perform maintenance on energised insulators.
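
The differential idea can be pictured with a first-order gradiometer toy model: treating the field of the distant pole conductor as uniform over the few-centimetre sensor baseline, the difference between the two readings rejects the line's contribution and retains the strongly distance-dependent field of the nearby leakage path. The geometry and currents below are illustrative assumptions, not the prototype's design values.

```python
# Toy differential (gradiometer) measurement of a small leakage current.
import numpy as np

MU0 = 4e-7 * np.pi

def wire_field(current_a, distance_m):
    """|B| of a long straight conductor at a radial distance (Biot-Savart)."""
    return MU0 * current_a / (2.0 * np.pi * distance_m)

i_leak = 0.05                            # 50 mA leakage current on the insulator
d_near, d_far = 0.01, 0.04               # the two magnetometers, 3 cm apart
b_background = wire_field(1500.0, 4.0)   # distant pole current, assumed uniform
                                         # over the short sensor baseline

b1 = b_background + wire_field(i_leak, d_near)
b2 = b_background + wire_field(i_leak, d_far)
diff = b1 - b2                           # the common-mode background cancels here

# Invert the differential reading back to an estimated leakage current
i_est = diff * 2.0 * np.pi / (MU0 * (1.0 / d_near - 1.0 / d_far))
print(f"estimated leakage current: {i_est * 1e3:.1f} mA")   # ~50.0 mA
```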

Keywords: direct current, insulator, leakage current, live line, magnetic field, sensor, transmission lines

Procedia PDF Downloads 150
258 Utility of Geospatial Techniques in Delineating Groundwater-Dependent Ecosystems in Arid Environments

Authors: Mangana B. Rampheri, Timothy Dube, Farai Dondofema, Tatenda Dalu

Abstract:

Identifying and delineating groundwater-dependent ecosystems (GDEs) is critical to a good understanding of the spatial distribution of GDEs as well as to groundwater allocation. However, this information is inadequately understood due to the limited data available for most areas of concern. Thus, this study aims to address this gap using remotely sensed data, the analytical hierarchy process (AHP) and in-situ data to identify and delineate GDEs in the Khakea-Bray Transboundary Aquifer. Our study developed a GDE index, which integrates seven explanatory variables, namely the Normalized Difference Vegetation Index (NDVI), Modified Normalized Difference Water Index (MNDWI), land use and land cover (LULC), slope, Topographic Wetness Index (TWI), flow accumulation and curvature. The GDE map was delineated using the weighted overlay tool in the ArcGIS environment. The map was spatially classified into two classes, namely GDEs and non-GDEs. The results showed that only 1.34% (721.91 km²) of the area is characterised by GDEs. Finally, groundwater level (GWL) data were used for validation through correlation analysis. Our results indicated that 1) GDEs are concentrated in the northern, central, and south-western parts of the study area, and 2) the validation showed that the GDE classes do not overlap with the GWLs recorded in the 22 boreholes found in the area. Nevertheless, the results show a possible delineation of GDEs in the study area using remote sensing and GIS techniques along with AHP. The results of this study further contribute to identifying and delineating priority areas where appropriate water conservation programs, as well as strategies for sustainable groundwater development, can be implemented.
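
A hedged sketch of the weighted-overlay step: each explanatory raster is rescaled to 0-1 and combined with AHP-derived weights into a GDE likelihood index, which is then thresholded into GDE and non-GDE classes. The weights, threshold and synthetic rasters below are illustrative assumptions, not the values used for the Khakea-Bray aquifer.

```python
# Weighted overlay of explanatory rasters into a GDE likelihood index.
import numpy as np

rng = np.random.default_rng(7)
shape = (100, 100)
layers = {                              # stand-ins for NDVI, MNDWI, TWI, slope, ...
    "ndvi": rng.random(shape),
    "mndwi": rng.random(shape),
    "twi": rng.random(shape),
    "slope": rng.random(shape),         # in practice slope would be inverted so
}                                        # flat areas score high

# AHP pairwise comparisons would normally yield these priority weights
weights = {"ndvi": 0.40, "mndwi": 0.25, "twi": 0.20, "slope": 0.15}

def rescale(a):
    return (a - a.min()) / (a.max() - a.min())

gde_index = sum(weights[k] * rescale(v) for k, v in layers.items())

gde_mask = gde_index >= np.quantile(gde_index, 0.95)   # top 5% as candidate GDEs
print(f"candidate GDE fraction: {gde_mask.mean():.3%}")
```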

Keywords: analytical hierarchy process (AHP), explanatory variables, groundwater-dependent ecosystems (GDEs), khakea-bray transboundary aquifer, sentinel-2

Procedia PDF Downloads 82
257 Expanding the Therapeutic Utility of Curcumin

Authors: Azza H. El-Medany, Hanan H. Hagar, Omnia A. Nayel, Jamila H. El-Medany

Abstract:

In the search for drugs that can target the cancer cell micro-environment as well as halt malignant cellular transformation, the natural dietary phytochemical curcumin was assessed in a DMH-induced colorectal cancer rat model. The study enrolled 50 animals divided into a control group (n=10) and a DMH-induced colorectal cancer control group (n=20) (20 mg/kg body weight for 28 weeks) versus a curcumin-treated group (n=20) (160 mg/kg suspension daily, orally, for a further 8 weeks). Treatment with curcumin succeeded in significantly decreasing the percentage of ACF and tended to normalize the histological changes induced by DMH in adenomatous and stromal cells. The drug also significantly elevated GSH and significantly reduced most of the accompanying biochemical elevations (namely MDA, TNF-α, TGF-β and COX2) observed in colonic carcinomatous tissue induced by DMH, thus reverting MDA, COX2 and TGF-β back to near normal, as justified by their being non-significantly altered compared to normal controls. The only exception was PAF, which was not significantly altered by the drug. Taken together, it could be concluded that curcumin possesses the potential to halt some of the orchestrated cross-talk between cancerous transformation and its micro-environmental niche that contributes to cancer initiation, progression and metastasis in this experimental colon cancer model. Translating these merits of a drug with an already known safety profile awaits the final results of current ongoing clinical trials, before curcumin can be added to the new therapeutic armamentarium of anticancer therapy.

Keywords: curcumin, dimethylhydrazine, aberrant crypt foci, malondialdehyde, reduced glutathione, cyclooxygenase-2, tumour necrosis factor-alpha, transforming growth factor-beta, platelet activating factor

Procedia PDF Downloads 275
256 Violence and Challenges in the Pamir Hindu Kush: A Study of the Impact of Change on a Central but Unknown Region

Authors: Skander Ben Mami

Abstract:

Despite its particular patterns and historical importance, the remote region of the Pamir Hindu Kush still lacks public recognition, as well as scientific substance, because of the abundance of classical state-centred geopolitical studies, the resilience of (inter)national narratives, and the political utility of the concepts of 'Central Asia' and 'South Asia'. However, this specific region of about 100 million inhabitants, located at the criss-cross of four geopolitical areas (Indian, Iranian, Chinese and Russian) over a territory of half a million square kilometres, features a string of patterns that set it apart from the neighbouring areas of the Fergana, the Gansu and Punjab. Moreover, the Pamir Hindu Kush is undergoing a series of parallel social and economic transformations that deserve scrutiny for their strong effect on people's lifestyles, particularly in three major urban centres (Aksu in China, Bukhara in Uzbekistan and Islamabad in Pakistan) and their immediate rural surroundings. While the involvement of various public and private stakeholders (states, NGOs, civil movements, private firms…) has undeniably resulted in positive elements (economic growth, connectivity, higher school attendance), it has at the same time generated a collection of negative effects (radicalization, inequalities, pollution, territorial divide) that need to be addressed to strengthen regional and international security. This paper underscores the region's strategic importance as a major hotbed and engine of insecurity and violence in Asia, notably in the context of Afghanistan's enduring violence. It introduces the inner structures of the region, the different sources of violence, and the governments' responses to address it.

Keywords: geography, security, terrorism, urbanisation

Procedia PDF Downloads 105
255 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical as value-added multimedia services grow increasingly in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative logarithm interference function on the distance weight ratio, aimed at suppressing co-channel interference in the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and that the users' quality of service in the downlink can be satisfied. Besides, the average spectrum efficiency in the cellular network can be significantly improved, as the simulation results show.
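
As a toy stand-in for the proposed mechanism, the sketch below runs best-response dynamics in which each femto base station repeatedly switches to the channel maximising a negative-logarithm utility of the distance-weighted co-channel interference it receives, stopping when no station wants to deviate. Positions, path-loss exponent and channel count are illustrative assumptions, and the utility is a simplified stand-in for the paper's function.

```python
# Distributed channel selection via best-response dynamics (toy example).
import numpy as np

rng = np.random.default_rng(3)
n_bs, n_ch, alpha = 8, 3, 3.0
pos = rng.random((n_bs, 2)) * 100.0                   # femto BS positions, metres
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)                        # no self-interference

channel = rng.integers(0, n_ch, size=n_bs)            # random initial strategies

def utility(i, c):
    """Negative log of (1 + distance-weighted co-channel interference at BS i)."""
    same = (channel == c)
    same[i] = False
    interference = np.sum(dist[i, same] ** (-alpha))
    return -np.log1p(interference)

for _ in range(50):                                   # best-response iterations
    changed = False
    for i in range(n_bs):
        best = max(range(n_ch), key=lambda c: utility(i, c))
        if best != channel[i]:
            channel[i], changed = best, True
    if not changed:                                   # no station wants to deviate
        break

print("equilibrium channel assignment:", channel.tolist())
```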

Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation

Procedia PDF Downloads 131