Search results for: highly accurate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6744

474 Chronically Ill Patient Satisfaction: An Indicator of Quality of Service Provided at Primary Health Care Settings in Alexandria

Authors: Alyaa Farouk Ibrahim, Gehan ElSayed, Ola Mamdouh, Nazek AbdelGhany

Abstract:

Background: Primary health care (PHC) can be considered the first contact between the patient and the health care system. It includes all the basic health care services to be provided to the community. Patient satisfaction with health care often improves the provision of care and is considered one of the most important measures for evaluating health care. Objective: This study aims to identify patients’ satisfaction with services provided at primary health care settings in Alexandria. Setting: Seven primary health care settings representing the seven zones of Alexandria governorate were selected randomly and included in the study. Subjects: The study comprised 386 patients who had attended the selected settings at least twice before the time of the study. Tools: Two tools were utilized for data collection: a sociodemographic characteristics and health status structured interview schedule, and a patient satisfaction scale. Reliability of the scale was tested using Cronbach's alpha; coefficients ranged between 0.717 and 0.967. The overall satisfaction score was computed and divided into high, medium, and low satisfaction. Results: Age of the studied sample ranged between 19 and 62 years; more than half (54.2%) were aged 40 to less than 60 years. More than half (52.8%) of the patients included in the study were diabetic, 39.1% were hypertensive, and 19.2% had cardiovascular diseases; the rest of the sample had tumors, liver diseases, and orthopedic/neurological disorders (6.5%, 5.2%, and 3.2%, respectively). The vast majority of the study group reported high satisfaction with overall service cost, environmental conditions, medical staff attitude, and health education given at the PHC settings (87.8%, 90.7%, 86.3%, and 90.9%, respectively); however, medium satisfaction was mostly reported concerning medical checkup procedures, follow-up data, and the referral system (41.2%, 28.5%, and 28.9%, respectively).
The score level of patient satisfaction with health services provided at the assessed primary health care settings proved to be significantly associated with patients’ social status (P=0.003, X²=14.2), occupation (P=0.011, X²=11.2), and monthly income (P=0.039, X²=6.50). In addition, a significant association was observed between score level of satisfaction and type of illness (P=0.007, X²=9.366), type of medication (P=0.014, X²=9.033), and prior knowledge about the health center (P=0.050, X²=3.346), and a highly significant association with the administrative zone (P=0.001, X²=55.294). Conclusion: The current study revealed that overall service cost, environmental conditions, staff attitude, and health education at the assessed primary health care settings gained a high patient satisfaction level, while medical checkup procedures, follow-up, and the referral system yielded a medium level of satisfaction among assessed patients. Moreover, social status, occupation, monthly income, type of illness, type of medication, and administrative zone are all factors influencing patient satisfaction with services provided at the health facilities.
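The associations above rest on chi-square tests of independence. As a minimal pure-Python illustration of the computation (the contingency table below is hypothetical, not the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: satisfaction level (rows) by occupation group (cols)
table = [[40, 25, 15],   # high satisfaction
         [30, 35, 20],   # medium satisfaction
         [10, 20, 25]]   # low satisfaction
print(round(chi_square(table), 2))  # → 19.47
```

The statistic is then compared against a chi-square distribution with (rows-1)(cols-1) degrees of freedom to obtain the P-values reported in the abstract.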

Keywords: patient satisfaction, chronic illness, quality of health service, quality of service indicators

Procedia PDF Downloads 350
473 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene

Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald

Abstract:

In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, which focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions, and so form a link between emergency relief strategies and long-term development goals. 
ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. Developing the training materials was a highly collaborative and iterative process between ENPHO and CAWST, which enabled the materials to be produced within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term developmental goals. While the recovery phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST’s WASH Resources website.

Keywords: water and sanitation, emergency response, education and training, building resilience

Procedia PDF Downloads 305
472 De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities

Authors: Faith Abdul Rasak Asharaf

Abstract:

Every city has a threshold known as its urban carrying capacity, the particular density of people it can withstand, above which the city might need to resort to measures like expanding its boundaries or growing vertically. As a result of this circumstance, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint is one criterion of urban carrying capacity: the amount of land required to provide humanity with renewable resources and to absorb its waste. However, as each piece of land has its unique carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried throughout the years to meet the increasing urban population density by relocating the zones of work, life, and leisure to achieve maximum sustainable growth. The current scenario is that of the vertical city and compact city concepts, in which the maximum density of people is fitted into a definite area using efficient land use and a variety of other strategies; but this has proven to be a very unsustainable method of growth, as evidenced by the COVID-19 period. 
Due to a shortage of housing and basic infrastructure, densely populated cities, unable to accommodate the overflowing migrants, gave rise to massive squatter communities. To achieve optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented. These will help to relieve the congested city core by relocating certain sectors of the town to the city periphery, creating newer spaces for design in terms of public space, transportation, and housing, which is a major concern in the current scenario. The study's goal is to suggest design options and solutions in terms of placemaking for better urban quality and urban life for the citizens once city centres have been de-densified based on urban carrying capacity and ecological footprint, taking the case of Kochi as an apt example of a highly densified city core and focusing on Edappally, which is an agglomeration of many urban factors.

Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint

Procedia PDF Downloads 78
471 Recycling the Lanthanides from Permanent Magnets by Electrochemistry in Ionic Liquid

Authors: Celine Bonnaud, Isabelle Billard, Nicolas Papaiconomou, Eric Chainet

Abstract:

Thanks to their high magnetization and low mass, permanent magnets (NdFeB and SmCo) have quickly become essential for new energies (wind turbines, electric vehicles, etc.). They contain large quantities of neodymium, samarium, and dysprosium, which have recently been classified as critical elements and therefore need to be recycled. Electrochemical processes, including electrodissolution followed by electrodeposition, are an elegant and environmentally friendly solution for recycling such lanthanides contained in permanent magnets. However, electrochemistry of the lanthanides is a real challenge, as their standard potentials are highly negative (around -2.5 V vs. NHE). Consequently, non-aqueous solvents are required. Ionic liquids (IL) are novel electrolytes exhibiting physico-chemical properties that fulfill many requirements of the sustainable chemistry principles, such as extremely low volatility and non-flammability. Furthermore, their chemical and electrochemical properties (solvation of metallic ions, large electrochemical windows, etc.) render them very attractive media for implementing alternative and sustainable processes in view of integrated processes. All experiments that will be presented were carried out using butyl-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide. Linear sweep, cyclic voltammetry, and potentiostatic electrochemical techniques were used. The reliability of electrochemical experiments performed without a glove box, for the classic three-electrode cell used in this study, has been assessed. Deposits were obtained by chronoamperometry and were characterized by scanning electron microscopy and energy-dispersive X-ray spectroscopy.
The IL cathodic behavior under different constraints (argon, nitrogen, or oxygen atmosphere, or water content) and using several electrode materials (Pt, Au, GC) shows that with argon gas flow and gold as the working electrode, the cathodic potential can reach the maximum value of -3 V vs. Fc+/Fc, thus allowing a possible reduction of lanthanides. On a gold working electrode, the reduction potential of samarium and neodymium was found to be -1.8 V vs. Fc+/Fc, while that of dysprosium was -2.1 V vs. Fc+/Fc. The individual deposits obtained were found to be porous and contained significant amounts of C, N, F, S, and O atoms. Selective deposition of neodymium in the presence of dysprosium was also studied and will be discussed. Next, metallic Sm, Nd, and Dy electrodes were used in place of Au, which induced changes in the reduction potential values and the deposit structures of the lanthanides. The individual corrosion potentials were also measured in order to determine the parameters influencing the electrodissolution of these metals. Finally, a full recycling process was investigated. Electrodissolution of a real permanent magnet sample was monitored kinetically. Then, the sequential electrodeposition of all lanthanides contained in the IL was investigated. Yields, quality of the deposits, and consumption of chemicals will be discussed in depth, in view of the industrial feasibility of this process for real permanent magnet recycling.

Keywords: electrodeposition, electrodissolution, ionic liquids, lanthanides, recycling

Procedia PDF Downloads 272
470 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently transcribed incorrectly were recorded and then replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation: any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. The data is also chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
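The splitting, filtering, and keyword steps can be sketched with the standard library alone. The speaker-marker format and the capitalized-word heuristic below are assumptions for illustration; the actual pipeline used an open-source grammatical formatter and a dedicated keyword extractor:

```python
import re

def split_paragraphs(transcript):
    """Split on speaker-change markers like 'NAME:' at line start and
    strip the marker itself; the marker format is an assumption."""
    parts = re.split(r"\n(?=[A-Z][A-Za-z .]*:)", transcript)
    cleaned = []
    for p in parts:
        p = re.sub(r"^[A-Z][A-Za-z .]*:\s*", "", p.strip())
        if p:
            cleaned.append(p)
    return cleaned

def filter_banter(paragraphs, min_chars=300):
    """Drop paragraphs under 300 characters (banter/side comments)."""
    return [p for p in paragraphs if len(p) >= min_chars]

def proper_nouns(paragraph):
    """Crude proper-noun pass: capitalized words that are not
    sentence-initial; a stand-in for the keyword extractor."""
    nouns = set()
    for sentence in re.split(r"(?<=[.!?])\s+", paragraph):
        for word in sentence.split()[1:]:       # skip sentence-initial word
            token = word.strip(",.;:!?\"'()")
            if len(token) > 1 and token[0].isupper() and token[1:].islower():
                nouns.add(token)
    return nouns

demo = ("ALICE: " + "The archive shows this clearly. " * 8
        + "The project was influenced by Turing and later artists."
        + "\nBOB: Sure, that makes sense.")
kept = filter_banter(split_paragraphs(demo))
print(len(kept), sorted(proper_nouns(kept[0])))  # → 1 ['Turing']
```

In the abstract's pipeline, the paragraphs surviving this stage and containing extracted proper nouns would then be passed to the B.A.R.T. summarization model.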

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 27
469 Targeting Glucocorticoid Receptor Eliminates Dormant Chemoresistant Cancer Stem Cells in Glioblastoma

Authors: Aoxue Yang, Weili Tian, Haikun Liu

Abstract:

Brain tumor stem cells (BTSCs) are resistant to therapy and give rise to recurrent tumors. These rare and elusive cells are likely to disseminate during cancer progression, and some may enter dormancy, remaining viable but not proliferating. The identification of dormant BTSCs is thus necessary to design effective therapies for glioblastoma (GBM) patients. Glucocorticoids (GCs) are used to treat GBM-associated edema. However, glucocorticoids also participate in the physiological response to psychosocial stress, which is linked to poor cancer prognosis. This raises concern that glucocorticoids affect the tumor and BTSCs. Identifying markers specifically expressed by BTSCs may enable specific therapies that spare their normal tissue-resident counterparts. By ribosome profiling analysis, we have identified that glycerol-3-phosphate dehydrogenase 1 (GPD1) is expressed by dormant BTSCs but not by neural stem cells (NSCs). Through different stress-induction experiments in vitro, we found that only dexamethasone (DEXA) can significantly increase the expression of GPD1 in NSCs. Conversely, mifepristone (MIFE), a glucocorticoid receptor (GR) antagonist, decreased GPD1 protein levels and weakened proliferation and stemness in BTSCs. Furthermore, DEXA induced GPD1 expression in tumor-bearing mouse brains and shortened animal survival, whereas MIFE had the opposite effect, prolonging mouse lifespan. Knocking out GR in NSCs blocked the upregulation of GPD1 induced by DEXA, and ChIP-seq identified the specific sequences on the GPD1 promoter bound by GR, which enhance the efficiency of GPD1 transcription. Moreover, GR and GPD1 are highly co-stained on GBM sections obtained from patients and mice. All these findings confirmed that GR regulates GPD1 and that loss of GPD1 impairs multiple pathways important for BTSC maintenance. GPD1 is also a critical enzyme regulating glycolysis and lipid synthesis.
We observed that DEXA and MIFE could change the metabolic profiles of BTSCs by regulating GPD1 to shift the transition into and out of dormancy. Our transcriptome and lipidomics analyses demonstrated that cell cycle signaling and phosphoglyceride synthesis pathways contributed substantially to the inhibition of GPD1 caused by MIFE. In conclusion, our findings raise concern that treatment of GBM with GCs may compromise the efficacy of chemotherapy and contribute to BTSC dormancy. Inhibition of GR can dramatically reduce GPD1 and extend the survival of GBM-bearing mice. The molecular link between GPD1 and GR may provide an attractive therapeutic target for glioblastoma.

Keywords: cancer stem cell, dormancy, glioblastoma, glycerol-3-phosphate dehydrogenase 1, glucocorticoid receptor, dexamethasone, RNA-sequencing, phosphoglycerides

Procedia PDF Downloads 131
468 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take-Off and Landing Aircraft

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute time for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute by using vehicles with the ability to take off and land vertically, providing passenger transport equivalent to a car, with mobility within large cities and between cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, the use of green power supplies, especially batteries, and fully electric power plants is the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle the use of batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic optimization algorithm, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when changing initial airspeed, altitude, flight path angle, and attitude.
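As a hedged sketch of the genetic-algorithm layer only: the chromosomes below encode a discretized vertical-thrust profile, and the mass, limits, and crude energy/touchdown-speed cost are simplified assumptions, not the authors' tilt-rotor dynamics:

```python
import random

random.seed(0)

STEPS, POP, GENS = 10, 40, 60
DT = 1.0          # s, time step (assumed)
MASS = 1200.0     # kg, assumed vehicle mass
G = 9.81          # m/s^2
V0 = -6.0         # m/s, initial descent rate (assumed)
T_MAX = 2 * MASS * G   # thrust upper bound (assumed)

# Feasible warm-start: constant thrust arresting the descent exactly.
BASELINE = [MASS * (G + abs(V0) / (STEPS * DT))] * STEPS

def simulate(thrusts):
    """Integrate vertical speed; return (energy proxy, touchdown speed)."""
    v, energy = V0, 0.0
    for t in thrusts:
        v += (t / MASS - G) * DT
        energy += t * DT           # crude energy proxy: thrust x time
    return energy, v

def fitness(thrusts):
    energy, v_td = simulate(thrusts)
    # Soft-landing constraint as a heavy penalty above 0.5 m/s.
    return energy + 1e6 * max(0.0, abs(v_td) - 0.5)

def evolve():
    pop = [BASELINE[:]] + [[random.uniform(0.0, T_MAX) for _ in range(STEPS)]
                           for _ in range(POP - 1)]
    for _ in range(GENS):
        pop.sort(key=fitness)                 # elitist selection
        elite = pop[:POP // 2]
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, STEPS)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(STEPS)       # single-gene mutation
            child[i] = min(T_MAX, max(0.0, child[i] + random.gauss(0.0, 500.0)))
            children.append(child)
        pop = elite + children
    pop.sort(key=fitness)
    return pop[0]

best = evolve()
# Elitism guarantees the result is at least as good as the warm start.
print(fitness(best) <= fitness(BASELINE))  # → True
```

The real problem would replace `simulate` with the fitted equations of motion and add the attitude, RPM, and thrust-direction variables plus the comfort and safety constraints named in the abstract.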

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 113
467 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with direct clinical evidence is a long-sought goal in the medical management and health insurance field. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about models' accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member’s encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years’ medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members’ medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. For example: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts member A has a high risk of CKD3 based on the following well-contributing clinical events: multiple high ‘Creatinine in Serum or Plasma’ tests and multiple low kidney-function ‘Glomerular filtration rate’ tests. Among the abnormal lab tests, more recent results contributed more to the prediction.
The model also indicates that regular office visits, no abnormal findings on medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model didn’t identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including ‘Glomerular filtration rate’, were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep learning model using a 3-level attention structure, sourcing both EMR and claims data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients’ demographics and information from free-text physician notes.
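The 3-level weighting described above can be sketched in pure Python. The scores and values below are made-up scalars standing in for the model's learned embeddings and attention logits, so this shows only the hierarchical attention arithmetic, not the trained network:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(scores, values):
    """Weighted sum of values under softmax(scores)."""
    return sum(w * v for w, v in zip(softmax(scores), values))

# Toy member history: code type -> encounters -> (score, value) per code.
# Scalars are hypothetical; the real model uses learned vectors.
history = {
    "lab":  [[(2.0, 0.9), (0.1, 0.2)],   # encounter 1: high creatinine
             [(1.5, 0.8)]],              # encounter 2: low GFR
    "diag": [[(0.3, 0.1)]],              # one benign diagnosis
}

# Level 3: codes within each encounter; Level 2: encounters within a type.
type_vectors, type_scores = [], []
for code_type, encounters in history.items():
    enc_vals = [attend([s for s, _ in enc], [v for _, v in enc])
                for enc in encounters]
    enc_scores = [max(s for s, _ in enc) for enc in encounters]
    type_vectors.append(attend(enc_scores, enc_vals))
    type_scores.append(max(enc_scores))

# Level 1: attention over the code types yields the risk summary.
risk = attend(type_scores, type_vectors)
print(0.0 < risk < 1.0)  # → True
```

Because each level's softmax weights are explicit, the per-code and per-encounter weights double as the contribution scores that make the prediction explainable.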

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 89
466 Cross-Language Variation and the ‘Fused’ Zone in Bilingual Mental Lexicon: An Experimental Research

Authors: Yuliya E. Leshchenko, Tatyana S. Ostapenko

Abstract:

Language variation is a widespread linguistic phenomenon which can affect different levels of a language system: phonological, morphological, lexical, syntactic, etc. Obviously, the scope of possible standard alternations within a particular language is limited by a variety of its norms and regulations, which set more or less clear boundaries for what is and is not possible for the speakers. The possibility of lexical variation (alternate usage of lexical items within the same contexts) is based on the fact that the meanings of words are not clearly and rigidly defined in the consciousness of the speakers. Therefore, lexical variation is usually connected with an unstable relationship between words and their referents: a case when a particular lexical item refers to different types of referents, or when a particular referent can be named by various lexical items. We assume that the scope of lexical variation in bilingual speech is generally wider than that observed in monolingual speech, since, besides ‘lexical item – referent’ relations, it involves the possibility of cross-language variation of L1 and L2 lexical items. We use the term ‘cross-language variation’ to denote a case when two equivalent words of different languages are treated by a bilingual speaker as freely interchangeable within a common linguistic context. As distinct from code-switching, which is traditionally defined as the conscious use of more than one language within one communicative act, in cross-language lexical variation the speaker does not perceive the alternate lexical items as belonging to different languages and, therefore, does not realize the change of language code. In this paper, the authors present research on lexical variation of adult Komi-Permyak – Russian bilingual speakers.
The two languages co-exist in the Komi-Permyak District in Russia (Komi-Permyak as the ethnic language and Russian as the official state language), are usually acquired from birth in a natural linguistic environment and, according to the data of sociolinguistic surveys, are both identified by the speakers as coordinate mother tongues. The experimental research demonstrated that alternation of Komi-Permyak and Russian words within one utterance/phrase is highly frequent both in speech perception and production. Moreover, our participants estimated cross-language word combinations like ‘маленькая /Russian/ нывка /Komi-Permyak/’ (‘a little girl’) or ‘мунны /Komi-Permyak/ домой /Russian/’ (‘go home’) as regular/habitual, containing no violation of any linguistic rules and being just as possible in speech as the equivalent intra-language word combinations (‘учöтик нывка’ /Komi-Permyak/ or ‘идти домой’ /Russian/). All facts considered, we claim that constant concurrent use of the two languages results in a large number of their words tending to be intuitively interpreted by the speakers as lexical variants not only related to the same referent, but also referring to both languages or, more precisely, to none of them in particular. Consequently, we can suppose that the bilingual mental lexicon includes an extensive ‘fused’ zone of lexical representations that provides the basis for cross-language variation in bilingual speech.

Keywords: bilingualism, bilingual mental lexicon, code-switching, lexical variation

Procedia PDF Downloads 148
465 Anisakidosis in Turkey: Serological Survey and Risk for Humans

Authors: E. Akdur Öztürk, F. İrvasa Bilgiç, A. Ludovisi, O. Gülbahar, D. Dirim Erdoğan, M. Korkmaz, M. Á. Gómez Morales

Abstract:

Anisakidosis is a zoonotic human fish-borne parasitic disease caused by accidental ingestion of anisakid third-stage larvae (L3) of members of the Anisakidae family present in infected marine fish or cephalopods. Infection with anisakid larvae can lead to gastric, intestinal, extra-gastrointestinal, and gastroallergic forms of the disease. Anisakid parasites have been reported in almost all seas, particularly in the Mediterranean Sea. There is a remarkably high level of risk of exposure to these zoonotic parasites, as they are present in economically and ecologically important fish of Europe. Anisakid L3 larvae have also been detected in several fish species from the Aegean Sea. Turkey is a peninsular country surrounded by the Black, Aegean, and Mediterranean Seas. In this country, fishing and fishery product consumption are highly common. In recent years, there has also been an increase in the consumption of raw fish due to the increasing interest in the cuisine of Far East countries. In different regions of Turkey, A. simplex (in Merluccius merluccius, Scomber japonicus, Trachurus mediterraneus, Sardina pilchardus, Engraulis encrasicolus, etc.), Anisakis spp., Contracaecum spp., Pseudoterranova spp., and C. aduncum have been identified as well. Although both the presence of anisakid parasites in fish and fishery products in Turkey and the occurrence of Turkish people with allergic manifestations after fish consumption are accepted, there are no reports of human anisakiasis in this country. Given the high prevalence of anisakid parasites in the country, the absence of reports is likely due not to the absence of clinical cases but to the unavailability of diagnostic tools and the low awareness of the presence of this infection. The aim of the study was to set up an IgE-Western Blot (WB) based test to detect anisakidosis sensitization among Turkish people with a history of allergic manifestation related to fish consumption.
To this end, crude worm antigens (CWA) and an allergen-enriched fraction (50-66%) were prepared from L3 of A. simplex (s.l.) collected from Lepidopus caudatus fished in the Mediterranean Sea. These proteins were electrophoretically separated and transferred onto nitrocellulose membranes. By WB, the specific proteins recognized by positive-control serum samples from sensitized patients were visualized on the membranes by a colorimetric reaction. The CWA and the 50-66% fraction showed specific bands, mainly due to Ani s 1 (20-22 kD) and Ani s 4 (9-10 kD). So far, a total of 7 serum samples from people with allergic manifestations and a positive skin prick test (SPT) after fish consumption have been tested, and all of them were negative by WB, indicating a lack of sensitization to anisakids. This preliminary study allowed a specific test to be set up and revealed the lack of correlation between the two tests, SPT and WB. However, the sample size should be increased to estimate the anisakidosis burden in Turkish people.

Keywords: anisakidosis, fish parasite, serodiagnosis, Turkey

Procedia PDF Downloads 139
464 A Research on the Effect of Soil-Structure Interaction on the Dynamic Response of Symmetrical Reinforced Concrete Buildings

Authors: Adinew Gebremeskel Tizazu

Abstract:

The effect of soil-structure interaction on the dynamic response of reinforced concrete buildings of regular and symmetrical geometry is considered in this study. The structures are presumed to be embedded in a homogeneous soil formation underlain by very stiff material or bedrock. The structure-foundation-soil system is excited at the base by an earthquake ground motion. The superstructure is idealized as a system with lumped masses concentrated at the floor levels and coupled with the substructure. The substructure system, which comprises the foundation and the soil, is represented by springs and dashpots. Frequency-dependent impedances of the foundation system are incorporated in the discrete model through the spring and dashpot coefficients. The excitation applied to the model consists of field ground motions from actual earthquake records. The modal superposition principle is employed to transform the equations of motion from geometric coordinates to modal coordinates. However, the modal equations remain coupled through the damping terms because of the different damping mechanisms of the superstructure and the soil; hence, proportional damping may not be assumed for the coupled structural system. An iterative approach is adopted and programmed to solve the system of coupled equations of motion in modal coordinates and obtain the displacement responses of the system. Parametric studies of the responses of building structures with regular and symmetric plans of different structural properties and heights are made for fixed- and flexible-base conditions, for different soil conditions encountered in Addis Ababa. The displacements, base shears, and base overturning moments are used to compare different types of structures for various foundation embedment depths, site conditions, and structure heights, and these values are compared against those of the fixed-base structure.
The study shows that flexible-base structures generally exhibit responses different from those of fixed-base structures. In particular, the natural circular frequencies, base shears, and inter-story displacements for the flexible base are lower than those of the fixed-base structures. This trend is particularly evident when the flexible soil layer is thick. In contrast, the trend becomes less predictable as the thickness of the flexible soil decreases; moreover, in the latter case the iteration oscillates significantly, making prediction difficult. This is attributed to the highly jagged nature of the impedance functions over frequency for such formations. In this case, it is difficult to conclude whether the conventional fixed-base approach yields conservative design forces, as it does for soil formations of large thickness.
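The frequency shift described above can be illustrated with a minimal lumped-mass sketch (masses, stiffnesses, and the single translational soil spring below are hypothetical, not the Addis Ababa case studies): constraining the base degree of freedom recovers the fixed-base system, so, by eigenvalue interlacing, adding base flexibility lowers the fundamental frequency.

```python
import numpy as np

# Hypothetical 2-story shear building: floor mass, story stiffness [SI units]
m, k = 1.0e5, 2.0e8
# Hypothetical foundation mass and soil (foundation) spring
m_b, k_f = 5.0e4, 5.0e8

def natural_freqs(M, K):
    """Sorted circular natural frequencies of M x'' + K x = 0."""
    return np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))

# Fixed base: floor DOFs only.
M_fix = np.diag([m, m])
K_fix = np.array([[2 * k, -k],
                  [-k,     k]])

# Flexible base: extra DOF for the foundation resting on the soil spring.
M_flex = np.diag([m_b, m, m])
K_flex = np.array([[k_f + k, -k,     0.0],
                   [-k,      2 * k, -k],
                   [0.0,     -k,     k]])

f_fix = natural_freqs(M_fix, K_fix)
f_flex = natural_freqs(M_flex, K_flex)
# The flexible-base fundamental frequency is lower than the fixed-base one.
```

This toy model captures only the stiffness side of the comparison; the paper's frequency-dependent dashpot impedances and non-proportional damping are what force the iterative modal solution described above.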

Keywords: soil-structure interaction, dynamic response, modal superposition principle, parametric studies

Procedia PDF Downloads 31
463 Prognostic Significance of Nuclear factor kappa B (p65) among Breast Cancer Patients in Cape Coast Teaching Hospital

Authors: Precious Barnes, Abraham Mensah, Leonard Derkyi-Kwarteng, Benjamin Amoani, George Adjei, Ernest Adankwah, Faustina Pappoe, Kwabena Dankwah, Daniel Amoako-Sakyi, Samuel Victor Nuvor, Dorcas Obiri-Yeboah, Ewura Seidu Yahaya, Patrick Kafui Akakpo, Roland Osei Saahene

Abstract:

Context: Breast cancer is a prevalent and aggressive type of cancer among African women, with high mortality rates in Ghana. Nuclear factor kappa B (NF-kB) is a transcription factor that has been associated with tumor progression in breast cancer. However, there is a lack of published data on NF-kB in breast cancer patients in Ghana or other African countries. Research Aim: The aim of this study was to assess the prognostic significance of NF-kB (p65) expression and its association with various clinicopathological features in breast cancer patients at the Cape Coast Teaching Hospital in Ghana. Methodology: A total of 90 formalin-fixed breast cancer tissues and 15 normal breast tissues were used in this study. The expression level of NF-kB (p65) was examined using immunohistochemical techniques. Correlation analysis between NF-kB (p65) expression and clinicopathological features was performed using SPSS version 25. Findings: The study found that NF-kB (p65) was expressed in 86.7% of breast cancer tissues. There was a significant relationship between NF-kB (p65) expression and tumor grade, proliferation index (Ki67), and molecular subtype. High-level expression of NF-kB (p65) was more common in tumor grade 3 compared to grade 1, and Ki67 > 20 had higher expression of NF-kB (p65) compared to Ki67 ≤ 20. Triple-negative breast cancer patients had the highest overexpression of NF-kB (p65) compared to other molecular subtypes. There was no significant association between NF-kB (p65) expression and other clinicopathological parameters. Theoretical Importance: This study provides important insights into the expression of NF-kB (p65) in breast cancer patients in Ghana, particularly in relation to tumor grade and proliferation index. The findings suggest that NF-kB (p65) could serve as a potential biological marker for cancer stage, progression, prognosis and as a therapeutic target. 
Data Collection and Analysis Procedures: Formalin-fixed breast cancer tissues and normal breast tissues were collected and analyzed using immunohistochemical techniques. Correlation analysis between NF-kB (p65) expression and clinicopathological features was performed using SPSS version 25. Question Addressed: This study addressed the question of the prognostic significance of NF-kB (p65) expression and its association with clinicopathological features in breast cancer patients in Ghana. Conclusion: This study, the first of its kind in Ghana, demonstrates that NF-kB (p65) is highly expressed among breast cancer patients at the Cape Coast Teaching Hospital, especially in triple-negative breast cancer patients. The expression of NF-kB (p65) is associated with tumor grade and proliferation index. NF-kB (p65) could potentially serve as a biological marker for cancer stage, progression, prognosis, and as a therapeutic target.
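The SPSS correlation analysis described above tests association between categorical variables; the underlying computation can be sketched as a chi-square test on a 2x2 table of NF-kB (p65) expression against tumor grade. The counts below are invented for illustration and are not the study's data.

```python
# Hypothetical 2x2 contingency table (NOT the study's counts):
# rows = tumor grade 3, grade 1; columns = high, low NF-kB (p65) expression.
table = [[30, 10],
         [8,  20]]

row_tot = [sum(r) for r in table]               # row marginals
col_tot = [sum(c) for c in zip(*table)]         # column marginals
n = sum(row_tot)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# with expected counts E_ij = row_i * col_j / n.
chi2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
    / (row_tot[i] * col_tot[j] / n)
    for i in range(2) for j in range(2)
)

# For df = 1 the 5% critical value is 3.841.
significant = chi2 > 3.841
```

A statistic above the critical value would mirror the paper's finding of a significant relationship between expression level and tumor grade.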

Keywords: breast cancer, Ki67, NF-kB (p65), tumor grade

Procedia PDF Downloads 70
462 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends heavily on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs, based on deep reinforcement learning (RL) algorithms, to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method built on Markov decision processes, which enables random variables to be taken into account for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables impose major constraints on the optimal operation of the network by the EMS. In this work, a microgrid in islanded mode is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for the near-optimal operating-cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL demands substantial computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating-cost estimation across several microgrid configurations. BCQ is an offline RL algorithm known to be data-efficient and able to learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the per-environment score and on the computation time.
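The batch-constrained idea can be illustrated with a tabular toy (not the authors' deep BCQ or their microgrid model): learning uses only a fixed buffer, and the bootstrapped maximum is restricted to actions the buffer actually contains for each next state. The hypothetical environment is a 4-level battery earning revenue on discharge and paying a small cost to charge.

```python
import random
from collections import defaultdict

random.seed(0)
LEVELS = 4                    # battery state of charge: 0..3
ACTIONS = (-1, 0, 1)          # discharge, idle, charge

def step(s, a):
    """Toy deterministic battery: discharge pays 1.0, charging costs 0.2."""
    s2 = min(max(s + a, 0), LEVELS - 1)
    r = 1.0 if (a == -1 and s > 0) else (-0.2 if a == 1 else 0.0)
    return r, s2

# Offline data: a buffer collected once by a random behaviour policy.
buffer, s = [], 0
for _ in range(2000):
    a = random.choice(ACTIONS)
    r, s2 = step(s, a)
    buffer.append((s, a, r, s2))
    s = s2

# Record which actions the buffer contains for each state.
seen = defaultdict(set)
for s, a, _, _ in buffer:
    seen[s].add(a)

# Batch Q-learning: repeated sweeps over the fixed buffer, with the
# target max constrained to in-buffer actions (the BCQ constraint).
Q, gamma, lr = defaultdict(float), 0.95, 0.2
for _ in range(300):
    for s, a, r, s2 in buffer:
        target = r + gamma * max(Q[(s2, a2)] for a2 in seen[s2])
        Q[(s, a)] += lr * (target - Q[(s, a)])

def greedy(s):
    """Greedy policy restricted to actions the buffer has shown for s."""
    return max(seen[s], key=lambda a: Q[(s, a)])
```

The learned policy discharges when the battery holds charge and recharges when empty; no environment interaction happens during training, which is the computational saving the abstract targets.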

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 121
461 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are metal because CFRP fabrication costs are high in this size class. The fact that the CFRP manufacturing processes producing the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally more expensive than comparably performing metal parts, which are easier to produce. Fortunately, industry is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials plus an ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass-customization opportunities, and high mechanical performance.
A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) with those of post-consolidation (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used, are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low void content and excellent surface finish on A and B sides can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be held cost-competitively in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 94
460 Examinations of Sustainable Protection Possibilities against Granary Weevil (Sitophilus granarius L.) on Stored Products

Authors: F. Pal-Fam, R. Hoffmann, S. Keszthelyi

Abstract:

Granary weevil, Sitophilus granarius (L.) (Col.: Curculionidae), is a typical cosmopolitan pest. It can cause significant damage to stored grains and can drastically decrease yields. Damaged grain has reduced nutritional and market value, weaker germination, and reduced weight. The commonly used protectants against stored-product pests in Europe are residual insecticides applied directly to the product. Unfortunately, these pesticides can be toxic to mammals, their residues can accumulate in the treated products, and many pest species can become resistant to them. In recent years, alternative solutions for grain protection have received increased attention and are considered the most promising alternatives to residual insecticides. The aims of our comparative study were to obtain information about the efficacy of (1) diatomaceous earth, (2) the sterile insect technique, and (3) herbal oils against S. granarius on grain (primarily maize), and to evaluate the influence of the dose rate on weevil mortality and progeny. The main results of our laboratory experiments are as follows: 1. Diatomaceous earth was especially efficacious against S. granarius, but its insecticidal properties depend on exposure time and applied dose. The efficacy on barley was better than on maize; the mortality value for the highest dose was 85% on the 21st day in the case of barley. Complete elimination of progeny was evidenced on both grain types. To summarize, a satisfactory efficacy level was obtained only on barley, at a rate of 4 g/kg. The difference in efficacy between grain types can be explained by differences in grain surface. 2. The mortality effect of Roentgen (X-ray) irradiation on S. granarius was highly influenced by the exposure time and the dose applied. At doses of 50 and 70 Gy, the efficacy accepted in plant protection (mortality: 95%) was recorded only on the 21st day.
During the application of the 100 and 200 Gy doses, high mortality values (83.5% and 97.5%) were observed by the 14th day. Our results confirmed the complete sterilizing effect of doses of 70 Gy and above. The autocide effect of the 50 and 70 Gy doses was demonstrated when irradiated specimens were mixed into groups of fertile specimens. Consequently, these doses might be successfully applied to put the sterile insect technique (SIT) into practice. 3. The results revealed that both studied essential oils (Calendula officinalis, Hippophae rhamnoides) exerted a strong toxic effect on S. granarius, but C. officinalis triggered higher mortality. An efficacy of 94.62 ± 2.63% was reached after a 48-hour exposure to H. rhamnoides oil at 2 ml/kg, while the application of 2 ml/kg of C. officinalis oil for 24 hours produced a 98.94 ± 1.00% mortality rate. Mortality was 100% at 5 ml/kg of H. rhamnoides after 24 hours of application, while with C. officinalis the same value was reached after a 12-hour exposure to the oil. Both essential oils eliminated the progeny.

Keywords: Sitophilus granarius, stored product, protection, alternative solutions

Procedia PDF Downloads 169
459 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the 'sins' of opacity and unfairness must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated, and a hetero-personalized identity can be imposed on the individual(s) affected. Autonomous CWA also sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts 'on the loop' might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA) of 21 April 2021 (as per the last corrigendum by the European Parliament of 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will take effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. The proposed MAS framework consists of several layers arranged in sequence: first, the Data Layer gathers legitimate predictor sets from traditional sources; then the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner.
From this analysis, a vital component of the software is the XAI layer. It acts as a transparent curtain over the AI's decision-making process, enabling comprehension, reflection, and feasible further oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be inherently vilified; the issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
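The attribution principle that SHAP approximates can be shown exactly on a toy model: compute each feature's Shapley value by averaging its marginal contribution over all coalitions. The three-feature linear credit-score model and its weights below are purely illustrative, not the framework's actual agents or the shap library.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "creditworthiness" model with illustrative weights,
# e.g. income, payment history, credit utilisation.
WEIGHTS = [0.5, 0.3, 0.2]

def score(present, x, baseline):
    """Model output with absent features replaced by baseline values."""
    z = [x[i] if i in present else baseline[i] for i in range(len(x))]
    return sum(w * zi for w, zi in zip(WEIGHTS, z))

def shapley(x, baseline):
    """Exact Shapley values by enumerating every coalition of features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (score(S | {i}, x, baseline)
                                    - score(S, x, baseline))
    return phi

phi = shapley([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
# For a linear model, phi_i = w_i * (x_i - baseline_i), and the values
# sum to the gap between the full score and the baseline (efficiency).
```

The efficiency property is what makes such attributions auditable: an analyst at the 'Bottom Stop' can check that the per-feature contributions account for the whole score.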

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 33
458 A Stepped Care mHealth-Based Approach for Obesity with Type 2 Diabetes in Clinical Health Psychology

Authors: Gianluca Castelnuovo, Giada Pietrabissa, Gian Mauro Manzoni, Margherita Novelli, Emanuele Maria Giusti, Roberto Cattivelli, Enrico Molinari

Abstract:

Diabesity can be defined as a new global epidemic of obesity and overweight with many complications and associated chronic conditions. These include not only type 2 diabetes but also cardiovascular diseases, hypertension, dyslipidemia, hypercholesterolemia, cancer, and various psychosocial and psychopathological disorders. The direct and indirect financial burden (considering also the clinical resources involved and the loss of productivity) is a real challenge for many Western health-care systems; recently, The Lancet described diabetes as a 21st-century challenge. In order to promote patient compliance in diabesity treatment while reducing costs, evidence-based interventions to improve weight loss, maintain a healthy weight, and reduce related comorbidities combine different treatment approaches: dietetic, nutritional, physical, behavioral, psychological, and, in some situations, pharmacological and surgical. Moreover, new technologies can provide useful solutions in this multidisciplinary approach, above all in maintaining long-term compliance and adherence in order to ensure clinical efficacy. Psychological therapies combined with diet and exercise plans can better help patients achieve weight-loss outcomes, both inside hospitals and clinical centers and during outpatient follow-up sessions. In the management of chronic diseases, clinical psychology plays a key role, given the need to work on the psychological conditions of patients, their families, and their caregivers. An mHealth approach can overcome the limitations of the traditional, restricted, and highly expensive inpatient treatment of many chronic pathologies: one of the best up-to-date applications is the management of obesity with type 2 diabetes, where mHealth solutions can provide remote opportunities for enhancing weight reduction and reducing complications from clinical, organizational, and economic perspectives.
A stepped-care mHealth-based approach is an interesting perspective in the chronic care management of obesity with type 2 diabetes. One promising future direction could be to treat obesity, considered as a chronic multifactorial disease, using a stepped-care approach:
- an mHealth-based or traditional lifestyle psychoeducational and nutritional approach;
- multidisciplinary protocols driven by health professionals and tailored to each patient;
- an inpatient approach including drug therapies and other multidisciplinary treatments;
- bariatric surgery with psychological and medical follow-up.
In the chronic care management of globesity, mHealth solutions cannot substitute for traditional approaches, but they can supplement some steps in clinical psychology and medicine, both for obesity prevention and for weight-loss management.

Keywords: clinical health psychology, mhealth, obesity, type 2 diabetes, stepped care, chronic care management

Procedia PDF Downloads 342
457 Improving the Biocontrol of the Argentine Stem Weevil Using the Parasitic Wasp Microctonus hyperodae

Authors: John G. Skelly, Peter K. Dearden, Thomas W. R. Harrop, Sarah N. Inwood, Joseph Guhlin

Abstract:

The Argentine stem weevil (ASW; L. bonariensis) is an economically important pasture pest in New Zealand, causing about $200 million of damage per annum. Microctonus hyperodae (Mh), a parasite of the ASW in its natural range in South America, was introduced into New Zealand to curb the pasture damage caused by the ASW. Mh is an endoparasitic wasp that lays its eggs in the ASW, halting the weevil's reproduction. Mh was initially successful at preventing ASW proliferation and reducing pasture damage, but its effectiveness has since declined owing to decreased parasitism rates, resulting in increased pasture damage. Although the mechanism through which the ASW has developed resistance to Mh has not been discovered, it has been proposed to lie in the different reproductive modes used by Mh and the ASW in New Zealand: the ASW reproduces sexually, whereas Mh reproduces asexually, which has been hypothesised to have allowed the ASW to 'out-evolve' Mh. Other species within the Microctonus genus reproduce both sexually and asexually. Strains of Microctonus aethiopoides (Ma), a species closely related to Mh, reproduce either sexually or asexually. Comparing the genomes of sexual and asexual Microctonus may allow the identification of the mechanism of asexual reproduction and of other characteristics that may improve Mh as a biocontrol agent. The genomes of Mh and three strains of Ma, two of which reproduce sexually and one asexually, have been sequenced and annotated. The French (MaFR) and Moroccan (MaMO) strains reproduce sexually, whereas the Irish strain (MaIR) reproduces asexually. Like Mh, the Ma strains are also used as biocontrol agents, but against different weevil species. The genomes of Mh and MaIR were subsequently upgraded using Hi-C, resulting in a set of high-quality, highly contiguous genomes.
A subset of the genes involved in mitosis and meiosis, identified through the use of Hidden Markov Models generated from genes involved in these processes in other Hymenoptera, has been catalogued in Mh and the strains of Ma. Meiosis and mitosis genes were broadly conserved in both sexual and asexual Microctonus species. This implies either that the asexual species have retained a subset of the molecular components required for sexual reproduction, or that the molecular mechanisms of mitosis and meiosis are different, or differently regulated, in Microctonus compared with the insect species in which these mechanisms are more broadly characterised. Bioinformatic analysis of the chemoreceptor complement in Microctonus has revealed some variation in the number of olfactory receptors, which may be related to host preference. Phylogenetic analysis of the olfactory receptors highlights variation that may explain the different host-range preferences within Microctonus. Hi-C clustering implies that Mh has 12 chromosomes and MaIR has 8; hence there may be variation in gene regulation between the species. Genome alignment of Mh and MaIR implies that there may be large-scale structural variation between the genomes. Greater insight into the genetics of this agriculturally important group of parasitic wasps may be beneficial in restoring or maintaining their biocontrol efficacy.

Keywords: argentine stem weevil, asexual, genomics, Microctonus hyperodae

Procedia PDF Downloads 154
456 Elucidation of Dynamics of Murine Double Minute 2 Shed Light on the Anti-cancer Drug Development

Authors: Nigar Kantarci Carsibasi

Abstract:

Coarse-grained elastic network models, namely the Gaussian network model (GNM) and the anisotropic network model (ANM), are utilized to investigate the fluctuation dynamics of Murine Double Minute 2 (MDM2), the native inhibitor of p53. The conformational dynamics of MDM2 are elucidated in the unbound, p53-bound, and non-peptide small-molecule-inhibitor-bound forms. The aim is to gain insight into the alterations brought to the global dynamics of MDM2 by the native peptide inhibitor p53 and by two small-molecule inhibitors (HDM201 and NVP-CGM097) that are undergoing clinical stages in cancer studies. MDM2 undergoes significant conformational changes upon inhibitor binding, showing evidence of an induced-fit mechanism. The small-molecule inhibitors examined in this work exhibit fluctuation dynamics and characteristic mode shapes similar to those of p53 when complexed with MDM2, which should shed light on the design of novel small-molecule inhibitors for cancer therapy. The results showed that residues Phe 19, Trp 23, and Leu 26 reside in the minima of the slowest modes of p53, pointing to the accepted three-finger binding model. Pro 27 displays the most significant hinge present in p53 and emerges as another functionally important residue. Three distinct regions are identified in MDM2 for which significant conformational changes are observed upon binding. Regions I (residues 50-77) and III (residues 90-105) correspond to the binding interface of MDM2, including α2, L2, and α4, which are stabilized during complex formation. Region II (residues 77-90) exhibits large-amplitude motion and is highly flexible both in the absence and in the presence of p53 or the other inhibitors. MDM2 exhibits a scattered profile in the fastest modes of motion, while the binding of p53 and the inhibitors puts restraints on the MDM2 domains, clearly distinguishing the kinetically hot regions.
Mode shape analysis revealed that the α4 domain controls the size of the cleft, keeping the cleft narrow in unbound MDM2 and open in the bound states for proper penetration and binding of p53 and the inhibitors, which points to an induced-fit mechanism of p53 binding. p53 interacts with α2 and α4 in a synchronized manner. The collective modes are shifted upon inhibitor binding, i.e., the characteristic motion of the second mode of the MDM2-p53 complex is observed in the first mode of apo MDM2; however, apo and bound MDM2 exhibit similar features in the softest modes, pointing to pre-existing modes that facilitate ligand binding. Although much larger amplitude motions are attained in the presence of the non-peptide small-molecule inhibitors than with p53, they demonstrate close similarity. Hence, NVP-CGM097 and HDM201 succeed in mimicking the p53 behavior well. Elucidating how drug candidates alter the global and conformational dynamics of MDM2 will shed light on the rational design of novel anticancer drugs.
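The GNM machinery behind this kind of analysis is compact enough to sketch: build a Kirchhoff (connectivity) matrix from Cα contacts within a distance cutoff, read residue fluctuations from the diagonal of its pseudo-inverse, and take slow-mode shapes from the low-frequency eigenvectors, whose minima flag hinge and binding residues. The coordinates below are a synthetic random chain, not the MDM2 structure.

```python
import numpy as np

# Synthetic C-alpha trace: a 30-residue random walk (NOT MDM2 coordinates).
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=1.5, size=(30, 3)), axis=0)

cutoff = 7.0  # Angstrom, the customary GNM contact cutoff
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
contact = (dist < cutoff).astype(float)
np.fill_diagonal(contact, 0.0)

# Kirchhoff matrix: degree on the diagonal, -1 per contact off-diagonal.
kirchhoff = np.diag(contact.sum(axis=1)) - contact

# Residue mean-square fluctuations are proportional to the diagonal of
# the pseudo-inverse (pinv discards the trivial zero/rigid-body mode).
flucts = np.diag(np.linalg.pinv(kirchhoff))

# Slowest internal mode: the eigenvector after the zero mode.
w, v = np.linalg.eigh(kirchhoff)
slow_mode = v[:, 1]
```

Plotting `flucts` against residue index reproduces the familiar B-factor-like profile, and the minima of `slow_mode` are the hinge candidates, as with Pro 27 in the abstract.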

Keywords: cancer, drug design, elastic network model, MDM2

Procedia PDF Downloads 128
455 Effect of Graded Level of Nano Selenium Supplementation on the Performance of Broiler Chicken

Authors: Raj Kishore Swain, Kamdev Sethy, Sumanta Kumar Mishra

Abstract:

Selenium is an essential trace element for the chicken, with a variety of biological functions in growth, fertility, the immune system, hormone metabolism, and antioxidant defense. Selenium deficiency in chickens causes exudative diathesis, pancreatic dystrophy, and nutritional muscular dystrophy of the gizzard, heart, and skeletal muscle. Additionally, insufficient immunity, lowered production ability, decreased feathering, and increased embryo mortality may occur due to selenium deficiency. Nano elemental selenium, which is bright red, highly stable, soluble, of nanometer size, and in the redox state of zero, has high bioavailability and low toxicity owing to its greater surface area, high surface activity, high catalytic efficiency, and strong adsorbing ability. To assess the effect of dietary nano-Se on performance and gene expression in Vencobb broiler birds in comparison with its inorganic form (sodium selenite), four hundred fifty day-old Vencobb broiler chicks were randomly distributed into 9 dietary treatment groups, each with two replicates of 25 chicks. The dietary treatments were: T1 (control): basal diet; T2: basal diet with 0.3 ppm of inorganic Se; T3: basal diet with 0.01875 ppm of nano-Se; T4: basal diet with 0.0375 ppm of nano-Se; T5: basal diet with 0.075 ppm of nano-Se; T6: basal diet with 0.15 ppm of nano-Se; T7: basal diet with 0.3 ppm of nano-Se; T8: basal diet with 0.60 ppm of nano-Se; T9: basal diet with 1.20 ppm of nano-Se. Nano selenium was synthesized by mixing sodium selenite with reduced glutathione and bovine serum albumin. The experiment was carried out in two phases, a starter phase (0-3 weeks) and a finisher phase (4-5 weeks), in a deep-litter system. The highest body weight at the 5th week was observed in T4, as was the best feed conversion ratio at the end of the 5th week.
Erythrocytic catalase, glutathione peroxidase and superoxide dismutase activity were significantly (P < 0.05) higher in all the nano selenium treated groups at the 5th week. The antibody titers (log2) against Ranikhet disease vaccine immunization of 5th-week broiler birds were significantly higher (P < 0.05) in treatments T4 to T7. The selenium levels in liver, breast, kidney, brain, and gizzard were significantly (P < 0.05) increased with increasing dietary nano-Se, indicating higher bioavailability of nano-Se compared to inorganic Se. Real-time polymerase chain reaction analysis showed an increase in the expression of antioxidative genes in the T4 and T7 groups. Therefore, it is concluded that supplementation of nano-selenium at 0.0375 ppm over and above the basal level can improve the body weight, antioxidant enzyme activity, Se bioavailability and expression of antioxidative genes in broiler birds.

Keywords: chicken, growth, immunity, nano selenium

Procedia PDF Downloads 175
454 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion

Authors: Hantian Wu, Bo Huang, Yuan Zeng

Abstract:

Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues and problems, urban design can at least employ spatial techniques that promote inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in the way the environment is perceived. China is experiencing rapid urbanization, which poses unpredictable environmental challenges. The urban green cover and water bodies are undergoing changes that are highly relevant to resident wealth and happiness. However, very limited knowledge and data on these rapid changes are available. In this regard, enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding the driving forces of those changes can make a significant contribution to urban planning and research. High-resolution remote sensing data have been widely applied to urban management in China; a 10-meter-resolution urban land use map for the entire country was published in 2018. However, that research focuses on large-scale, high-resolution land use mapping and does not precisely address the seasonal change of urban covers. High-resolution satellites have long revisit cycles (e.g., Landsat 8 requires 16 days to return to the same location), which cannot satisfy the requirements of monitoring urban landscape changes. On the other hand, aerial or unmanned aerial vehicle (UAV) sensing is constrained by aviation regulations and cost, and has hardly been applied widely in mega-cities.
Moreover, these data are limited by climate and weather conditions (e.g., cloud, fog), which make capturing spatial and temporal dynamics a persistent challenge for the remote sensing community. In particular, during the rainy season no data are available, even from the Sentinel satellites with their 5-day revisit interval. Many natural events and/or human activities drive the changes of urban covers. This project aims to use high spatiotemporal fusion of remote sensing data to create short-cycle, high-resolution remote sensing data sets for exploring high-frequency urban cover changes. This research will enhance the long-term monitoring applicability of high spatiotemporal fusion of remote sensing data for the urban landscape, optimizing the management of landscape borders to promote the inclusiveness of the urban landscape for all communities.
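The core idea of high spatiotemporal fusion can be sketched in a few lines: a fine-resolution image at one date is updated with the temporal change observed in co-registered coarse-resolution images. The sketch below is a minimal, illustrative reduction of STARFM-style fusion on synthetic arrays; the array sizes, pixel values, and sensor analogies are assumptions, not data from this project.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy scene: a 6x6 fine-resolution image at t1 (e.g. a Sentinel-2-like band),
# and 2x2 coarse images at t1 and t2 (each coarse pixel covers a 3x3 block)
fine_t1 = rng.uniform(0.2, 0.4, size=(6, 6))
coarse_t1 = fine_t1.reshape(2, 3, 2, 3).mean(axis=(1, 3))
coarse_t2 = coarse_t1 + 0.1   # a later, cloud-free coarse acquisition

def fuse(fine_t1, coarse_t1, coarse_t2, block=3):
    """Simplest spatiotemporal fusion identity (the backbone of STARFM-style
    methods): add the coarse-resolution temporal change, upsampled to the
    fine grid, onto the fine-resolution image."""
    delta = np.kron(coarse_t2 - coarse_t1, np.ones((block, block)))
    return fine_t1 + delta

fine_t2 = fuse(fine_t1, coarse_t1, coarse_t2)
```

Real fusion methods additionally weight neighboring fine pixels by spectral and spatial similarity; this identity is only the temporal-change backbone they share.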

Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management

Procedia PDF Downloads 124
453 Good Governance Complementary to Corruption Abatement: A Cross-Country Analysis

Authors: Kamal Ray, Tapati Bhattacharya

Abstract:

Use of public office for private gain could serve as a tentative definition of corruption; the most distasteful aspect of corruption is not merely that it exists, nor that it is pervasive, but that it is socially acknowledged in the global economy, especially in the developing nations. We attempted to assess the interrelationship between the Corruption Perception Index (CPI) and the principal governance indicators compiled by the World Bank: control of corruption (CC), rule of law (RL), regulatory quality (RQ) and government effectiveness (GE). Our empirical investigation concentrates on the degree to which the governance indicators are reflected in the CPI, in order to single out the most powerful corruption-related indicator in the selected countries. We collected time series data on the governance indicators CC, RL, RQ and GE for eleven selected countries from 1996 to 2012 from the World Bank data set. The countries are the USA, UK, France, Germany, Greece, China, India, Japan, Thailand, Brazil, and South Africa. The Corruption Perception Index (CPI) of these countries was also collected for the period 1996 to 2012. Simple line diagrams of the CPI time series give a quick view of the relative positions of the trend lines of the different nations. Correlation coefficients provide a first assessment of the degree and direction of association between the variables once the numerical data on governance indicators are obtained. The Granger causality test (1969) is used to investigate causal relationships between the variables, i.e., which is cause and which is effect. Stationarity tests were not carried out, as the time series are short. Linear regression is used to quantify the change in the explained variable due to a change in the explanatory variable, relating governance to corruption.
A bilateral positive causal link between CPI and CC is noticed in the UK: the index value of CC increases by 1.59 units as the CPI increases by one unit, and the CPI rises by 0.39 units as CC rises by one unit; hence there is a multiplier effect so far as the reduction of corruption in the UK is concerned. GE also contributes strongly to the reduction of corruption in the UK. In France, RQ is observed to be the most powerful indicator in reducing corruption, whereas in Japan it is the second most powerful after GE, which plays the leading role in pushing down corruption there. In China and India, GE is a proactive and influential indicator in curbing corruption. The inverse relationship between RL and the CPI in Thailand indicates that the ongoing machinery related to RL is not complementary to the reduction of corruption. The state machinery of CC in South Africa is highly relevant to reducing the volume of corruption. In Greece, the variations of the CPI positively influence the variations of CC, and the indicator GE is effective in controlling corruption as reflected by the CPI. All the governance indicators selected have failed to arrest state-level corruption in the USA, Germany and Brazil.
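The Granger causality logic used above can be illustrated with a minimal bivariate F-test: does adding lagged values of x improve the prediction of y beyond y's own lags? The sketch below uses synthetic series, not the actual CPI or governance data, and a single lag; a full analysis would select the lag order and compare the statistic against the F distribution.

```python
import numpy as np

def granger_f(y, x, lags=1):
    """F-statistic for 'x Granger-causes y': compares a restricted AR model
    of y against one augmented with lagged values of x."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    n = len(y)
    Y = y[lags:]
    # restricted design: intercept + own lags of y
    Xr = np.column_stack([np.ones(n - lags)] + [y[lags - k:n - k] for k in range(1, lags + 1)])
    # unrestricted design: additionally the lags of x
    Xu = np.column_stack([Xr] + [x[lags - k:n - k] for k in range(1, lags + 1)])
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return float(r @ r)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_den = len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_den)

# toy series in which x leads y by one period (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x)   # large: lags of x help predict y
f_yx = granger_f(x, y)   # small: lags of y do not help predict x
```

In the study's setting, y and x would be a country's CPI and one governance indicator, and a large statistic in both directions would indicate the bilateral link reported for the UK.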

Keywords: corruption perception index, governance indicators, granger causality test, regression

Procedia PDF Downloads 303
452 MOF [(4,4-Bipyridine)₂(O₂CCH₃)₂Zn]ₙ as Heterogeneous Acid Catalyst for the Transesterification of Canola Oil

Authors: H. Arceo, S. Rincon, C. Ben-Youssef, J. Rivera, A. Zepeda

Abstract:

Biodiesel has emerged as a material with great potential as a renewable replacement for current petroleum-based diesel. Recently, biodiesel production has focused on the development of more efficient, sustainable processes with lower production costs. In this sense, a "green" approach to biodiesel production has stimulated the use of sustainable heterogeneous acid catalysts, which are better alternatives to conventional processes because of their simplicity and their simultaneous promotion of esterification and transesterification reactions from low-grade, highly acidic, water-containing oils without the formation of soap. The focus of this methodology is the development of new heterogeneous catalysts that, under ordinary reaction conditions, can reach yields similar to homogeneous catalysis. In recent years, metal organic frameworks (MOFs) have attracted much interest for their potential as heterogeneous acid catalysts. They are crystalline porous solids formed by the association of transition metal ions or metal-oxo clusters with polydentate organic ligands. This hybridization confers on MOFs unique features such as high thermal stability, large pore size, high specific area, high selectivity and recycling potential. Thus, MOF application could be a way to improve biodiesel production processes. In this work, we evaluated the catalytic activity of MOF [(4,4-bipyridine)₂(O₂CCH₃)₂Zn]ₙ (MOF Zn-I) for the synthesis of biodiesel from canola oil. The reaction conditions were optimized using response surface methodology with a 2⁴ central composite design. The variables studied were reaction temperature, amount of catalyst, oil:MeOH molar ratio and reaction time. MOF Zn-I was prepared by mixing 5 mmol of 4,4'-bipyridine dissolved in 25 mL methanol with 10 mmol Zn(O₂CCH₃)₂ ∙ 2H₂O in 25 mL water. The crystals were obtained by slow evaporation of the solvents at 60°C for 18 h.
The prepared catalyst was characterized using X-ray diffraction (XRD) and Fourier transform infrared (FT-IR) spectrometry. Experiments were performed using commercially available canola oil in an Ace pressure tube under continuous stirring. The reaction mixture was filtered and vacuum distilled to remove the catalyst and excess alcohol, after which it was centrifuged to separate the obtained biodiesel and glycerol. 1H NMR was used to calculate the process yield, and GC-MS was used to quantify the fatty acid methyl esters (FAME). The results of this study show that the acid catalyst MOF Zn-I can be used for biodiesel production through the heterogeneous transesterification of canola oil, with a FAME yield of 82%. The optimum operating conditions for the catalytic reaction were 142°C, a 0.5% catalyst/oil weight ratio, a 1:30 oil:MeOH molar ratio and a 5 h reaction time.

Keywords: fatty acid methyl ester, heterogeneous acid catalyst, metal organic framework, transesterification

Procedia PDF Downloads 278
451 Developing Motorized Spectroscopy System for Tissue Scanning

Authors: Tuba Denkceken, Ayse Nur Sarı, Volkan Ihsan Tore, Mahmut Denkceken

Abstract:

The aim of the presented study was to develop a new motorized spectroscopy system. Our system is composed of a probe part and a motor part. The probe consists of bioimpedance and fiber optic components, comprising two platinum wires (each 25 micrometers in diameter) and two fiber cables (each 50 micrometers in diameter), respectively. The probe was examined on a tissue phantom (polystyrene microspheres of different diameters). In the bioimpedance part of the probe, current was transferred to the phantom and conductivity information was obtained. Two adjacent fiber cables were used in the fiber optic part of the system: light was transferred to the phantom by the fiber connected to the light source, and backscattered light was collected with the adjacent fiber for analysis. It is known that the nucleus expands and the nucleus-cytoplasm ratio increases during cancer progression in the cell, and this is one of the most important criteria pathologists use when evaluating tissue. The sensitivity of the probe to particle (nucleus) size in the phantom was tested during the study. Spectroscopic data obtained from our system on the phantom were evaluated by multivariate statistical analysis, yielding information about the particle size in the phantom. The bioimpedance and fiber optic experiments on polystyrene microspheres showed that the impedance value and the oscillation amplitude increased as the particle size enlarged. These results are compatible with previous studies. To motorize the system, three driver electronic circuits were designed first. In this part, supply capacitors were placed symmetrically near the supply inputs to balance the oscillation. Female capacitors were connected to the control pin. Optic and mechanic switches were made. The drivers were structurally designed to command highly calibrated motors.
It was considered important to keep the drivers' dimensions as small as possible (4.4x4.4x1.4 cm). Three miniature step motors were then connected to each other along with the three drivers. Since spectroscopic techniques are quantitative methods, they yield more objective results than traditional ones. In the next part of this study, we plan to obtain spectroscopic data carrying both optical and impedance information from normal, low-metastatic and high-metastatic breast cancer cell cultures. If high sensitivity in differentiating these cells is achieved, it may be possible to scan large tissue areas in a short time with small steps. By means of the motorized feature of the system, no region of the tissue will be missed; in this manner, cancerous parts of the tissue can be identified meticulously. This work is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) through 3001 project (115E662).

Keywords: motorized spectroscopy, phantom, scanning system, tissue scanning

Procedia PDF Downloads 190
450 Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia

Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić

Abstract:

Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. The trends among publicly managed campgrounds seem to be moving away from rustic campgrounds that provide only a tent pad and a fire ring towards more developed facilities offering a range of amenities, while campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly over the last years, coastal camping in particular has become valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, a demand for quality servicing infrastructure has emerged to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain under-researched areas of the tourism and consumer behavior literature. Very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential re-visitation and recommendation intention. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Owing to its well-preserved nature and indented coastline, Croatia has a long tradition of camping tourism, which represents one of its most important and most developed tourism products. During the last decade the number of tourist overnights in Croatian camps increased by 26%, amounting to 16.5 million in 2014.
Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost twice the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75.8 thousand accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay compared to the national average across all types of accommodation. In order to explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and impact-asymmetry analysis (IAA). Using the PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner. Using the IAA, the extracted factors are then investigated for their potential to create customer delight and/or frustration. The results provide valuable insight to both researchers and practitioners regarding the understanding of campsite satisfaction.
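The two analysis steps can be sketched on synthetic data: PCA extracts underlying experience dimensions from attribute ratings, and a simplified impact-asymmetry contrast compares the satisfaction lift among high scorers against the penalty among low scorers on each attribute. The survey values, quantile cut-offs, and the simplified asymmetry formula below are illustrative assumptions; the study's IAA uses a regression-based formulation rather than this quantile split.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic survey: 300 campers rate 6 campsite attributes (values illustrative)
n, p = 300, 6
latent = rng.normal(size=(n, 2))                  # two underlying experience factors
loadings = rng.uniform(0.3, 1.0, size=(2, p))
X = latent @ loadings + rng.normal(scale=0.5, size=(n, p))
overall = X @ rng.uniform(0.2, 1.0, size=p) + rng.normal(scale=0.5, size=n)

# principal components analysis via SVD of the centred attribute matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                   # variance share per component

def impact_asymmetry(attr, overall, lo_q=0.25, hi_q=0.75):
    """Contrast the satisfaction lift of high scorers (reward) against the
    drop for low scorers (penalty) on one attribute; positive values mean
    the attribute acts more as a delighter than as a frustrator."""
    lo, hi = np.quantile(attr, [lo_q, hi_q])
    reward = overall[attr >= hi].mean() - overall.mean()
    penalty = overall.mean() - overall[attr <= lo].mean()
    return (reward - penalty) / (reward + penalty)

ia = [impact_asymmetry(X[:, j], overall) for j in range(p)]
```

In practice, the asymmetry contrast would be applied to the PCA factor scores rather than the raw attributes, mirroring the study's two-stage design.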

Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction

Procedia PDF Downloads 186
449 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g., antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this was only achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features in a single monomer: a zwitterionic sulfobetaine, ensuring antifouling capabilities, equipped with a clickable azide moiety that allows for further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido-monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions.
The surfaces showed excellent antifouling properties as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to traditional modification of sulfobetaine brushes that rely on a 2D-approach in which only the top-layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring resonator based assays, however, this methodology can be transferred to other biosensor platforms which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g. depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 305
448 Research Project on Learning Rationality in Strategic Behaviors: Interdisciplinary Educational Activities in Italian High Schools

Authors: Giovanna Bimonte, Luigi Senatore, Francesco Saverio Tortoriello, Ilaria Veronesi

Abstract:

The education process considers capabilities not merely as a means to a certain end but as an effective purpose in themselves. Sen's capability approach challenges human capital theory, which sees education as an ordinary investment undertaken by individuals. A complex reality requires complex thinking capable of interpreting the dynamics of society's changes in order to make decisions that are rational in private, ethical and social contexts. Education is not removed from the cultural and social context; it exists and is structured within it. In Italy, the "Mathematical High School Project" is a didactic research project based on additional laboratory courses in extracurricular hours, in which mathematics enters into a dialectical relationship with other disciplines as a cultural bridge between the two cultures, the humanistic and the scientific, with interdisciplinary educational modules on themes with a strong impact on young people's lives. This interdisciplinary mathematics presents topics related to the most advanced technologies and contemporary socio-economic frameworks to demonstrate that mathematics is not only a key to reading reality but also a key to resolving complex problems. The recent developments in mathematics provide the potential for profound and highly beneficial changes in mathematics education at all levels, as in socio-economic decision-making. The research project is built to investigate whether repeated interactions can successfully promote cooperation among students as a rational choice, and whether skill, context and school background influence strategy choice and rationality. A laboratory on game theory as a mathematical theory was conducted in the 4th year of a Mathematical High School and of an ordinary scientific high school.
Students played two simultaneous games of repeated Prisoner's Dilemma with an indefinite horizon against two different opponents, one in each game; each opponent remained the same for the duration of the game. The results highlight that most of the students in the two classes approached the two games with an immunization strategy against the risk of losing: in one of the games they started by playing Cooperate, and in the other by playing Defect. In the literature, theoretical models and experiments show that in the case of repeated interactions with the same adversary, the optimal cooperation strategy can be achieved by tit-for-tat mechanisms. In higher education, individual capacities cannot be examined independently, as the conceptual framework presupposes a social construction of individuals interacting and competing, making individual and collective choices. The paper outlines the results of the experimentation and the future development of the research.
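The strategies discussed can be made concrete with a tiny simulation of the repeated Prisoner's Dilemma. The payoff matrix below is the conventional (3, 0, 5, 1) scheme, an assumption since the abstract does not state the payoffs used in class; the observed "immunization" opening is sketched as a strategy that cooperates in one game and defects in the other before reverting to tit-for-tat.

```python
# payoff to the row player for (own move, opponent move); C = cooperate, D = defect
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(hist_self, hist_opp):
    """Cooperate first, then copy the opponent's previous move."""
    return hist_opp[-1] if hist_opp else 'C'

def always_defect(hist_self, hist_opp):
    return 'D'

def hedger(game_id):
    """Sketch of the 'immunization' opening seen in the classroom data:
    open with Cooperate in game 0 and Defect in game 1, then mirror."""
    def strategy(hist_self, hist_opp):
        if not hist_opp:
            return 'C' if game_id == 0 else 'D'
        return hist_opp[-1]
    return strategy

def play(strat_a, strat_b, rounds=50):
    """Run one repeated game and return the two total scores."""
    ha, hb = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        ha.append(a)
        hb.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (150, 150): cooperation locks in
print(play(tit_for_tat, always_defect))    # (49, 54): one exploit, then mutual loss
print(play(hedger(1), tit_for_tat))        # (125, 125): the defecting opening causes costly alternation
```

The last line illustrates why the hedge is costly: the mismatched openings against a mirroring opponent lock both players into alternating exploitation, leaving each below the 150 points of mutual cooperation.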

Keywords: game theory, interdisciplinarity, mathematics education, mathematical high school

Procedia PDF Downloads 73
447 Internet Memes as Meaning-Making Tools within Subcultures: A Case Study of Lolita Fashion

Authors: Victoria Esteves

Abstract:

Online memes have not only impacted different aspects of culture, but they have also left their mark on particular subcultures, where memes have reflected issues and debates surrounding specific spheres of interest. This is the first study that outlines how memes can address cultural intersections within the Lolita fashion community, which are much more specific and which fall outside of the broad focus of politics and/or social commentary. This is done by looking at the way online memes are used in this particular subculture as a form of meaning-making and group identity reinforcement, demonstrating not only the adaptability of online memes to specific cultural groups but also how subcultures tailor these digital objects to discuss both community-centered topics and more broad societal aspects. As part of an online ethnography, this study focuses on qualitative content analysis by taking a look at some of the meme communication that has permeated Lolita fashion communities. Examples of memes used in this context are picked apart in order to understand this specific layered phenomenon of communication, as well as to gain insights into how memes can operate as visual shorthand for the remix of meaning-making. There are existing parallels between internet culture and cultural behaviors surrounding Lolita fashion: not only is the latter strongly influenced by the former (due to its highly globalized dispersion and lack of physical shops, Lolita fashion is almost entirely reliant on the internet for its existence), both also emphasize curatorial roles through a careful collaborative process of documenting significant aspects of their culture (e.g., Know Your Meme and Lolibrary). 
Further similarities appear when looking at ideas of inclusion and exclusion that permeate both cultures, where memes and language are used both to solidify group identity and to police those who do not ascribe to these cultural tropes correctly, creating a feedback loop that reinforces subcultural ideals. Memes function as excellent forms of communication within the Lolita community because they reinforce its coded ideas and allow a kind of participation that echoes other online-heavy cultural groups such as fandoms. Furthermore, whilst the international Lolita community was mostly self-contained within its LiveJournal birthplace, it has become increasingly dispersed through an array of different social media groups that have fragmented this subculture significantly. The use of memes is key in maintaining a sense of connection throughout this now fragmentary experience of the fashion. Memes are also used in the Lolita fashion community to bridge the gap between community issues and wider global topics; these reflect not only an ability to make use of a broader online language to address specific issues of the community (which in turn provides a very community-specific engagement with remix practices) but also memes' ability to be tailored to accommodate overlapping cultural and political concerns and discussions between subcultures and broader societal groups. Ultimately, online memes provide the necessary elasticity to allow their adaptation and adoption by subcultural groups, who in turn use memes to extend their meaning-making processes.

Keywords: internet culture, Lolita fashion, memes, online community, remix

Procedia PDF Downloads 167
446 Transient Heat Transfer: Experimental Investigation near the Critical Point

Authors: Andreas Kohlhepp, Gerrit Schatte, Christoph Wieland, Hartmut Spliethoff

Abstract:

In recent years, research on heat transfer phenomena of water and other working fluids near the critical point has experienced growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference between fluid and wall temperatures may cause problems. They may lead to deteriorated heat transfer (at supercritical pressures) or to dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an excessive rise of wall temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state-of-the-art method of calculating heat transfer coefficients is to apply a multitude of different steady-state correlations to the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients, and it shows gaps between the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation against experimental data. A new full-scale supercritical thermo-hydraulic test rig provides experimental data to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for the validation of transient steam generator calculations.
The test rig was specially designed with the aim of improving correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a large range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows accurate determination of the local heat transfer coefficients and fluid enthalpies.
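The quasi-steady calculation approach described above can be sketched as follows: at each time step, a steady-state correlation is evaluated with the instantaneous local parameters. The sketch uses the single-phase Dittus-Boelter correlation and rough, illustrative property values, not data from the test rig; a real steam generator code would switch between several correlations across flow and heat transfer regimes.

```python
def dittus_boelter_htc(mass_flux, d, mu, cp, k, heating=True):
    """Single-phase heat transfer coefficient from Dittus-Boelter:
    Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 for heating. SI units throughout;
    mass_flux in kg/(m^2 s), d in m; returns h in W/(m^2 K)."""
    re = mass_flux * d / mu          # Reynolds number from mass flux
    pr = cp * mu / k                 # Prandtl number from local properties
    nu = 0.023 * re**0.8 * pr**(0.4 if heating else 0.3)
    return nu * k / d

# quasi-steady transient: re-evaluate the steady-state correlation at every
# time step with the instantaneous local fluid properties (values are rough
# illustrations for water near the critical region, not rig measurements)
d = 0.02  # tube inner diameter, m
steps = [
    {"mass_flux": 1000.0, "mu": 9.0e-5, "cp": 5500.0, "k": 0.55},
    {"mass_flux": 800.0, "mu": 8.5e-5, "cp": 6000.0, "k": 0.50},
    {"mass_flux": 600.0, "mu": 8.0e-5, "cp": 6800.0, "k": 0.45},
]
htc = [dittus_boelter_htc(s["mass_flux"], d, s["mu"], s["cp"], s["k"]) for s in steps]
```

The falling coefficient along the load ramp illustrates the concern raised above: each step is internally consistent, but nothing in the per-step evaluation captures genuinely transient effects or the onset of deteriorated heat transfer between the correlations' ranges of validity.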

Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators

Procedia PDF Downloads 218
445 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models are also employed, independently of Infernal, in the pHMM Riboswitch Scanner web application. Other computational approaches include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction, using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure, with sequence treated as a constraint.
However, this method cannot be used genome-wide because each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy this inefficiency by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and on our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, can find attractive candidates that are missed by Infernal and the other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
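A minimal version of the moving-window idea is sketched below. The folding step is replaced by a toy stem-pairing score as a stand-in for an energy-minimization call (e.g., RNAfold), since the point here is only the scanning scaffold; the window size, step, and threshold are illustrative assumptions.

```python
def toy_fold_score(seq):
    """Stand-in for an energy-minimization call (e.g. RNAfold): counts how
    many bases at the two ends of the window could pair into a hairpin stem,
    keeping at least a 4-base loop. A real scan would fold each window and
    compare the predicted structure to the riboswitch consensus."""
    pair = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}
    limit = (len(seq) - 4) // 2
    score = 0
    for a, b in zip(seq, reversed(seq)):
        if score >= limit or pair.get(a) != b:
            break
        score += 1
    return score

def window_scan(genome, size=30, step=5, min_score=6):
    """Slide a window along the sequence and report positions whose window
    scores at least min_score, mimicking a vicinity-of-gene scan."""
    hits = []
    for i in range(0, len(genome) - size + 1, step):
        s = toy_fold_score(genome[i:i + size])
        if s >= min_score:
            hits.append((i, s))
    return hits

# a planted 10-bp hairpin inside unstructured flanks is recovered at offset 10
hairpin = 'G' * 10 + 'AAAA' + 'C' * 10
genome = 'AU' * 5 + hairpin + 'AU' * 5
print(window_scan(genome, size=24, step=1, min_score=8))   # [(10, 10)]
```

The cost argument in the text is visible even here: the scoring call runs once per window position, so with a genuine folding routine in place of the toy score, a genome-wide scan multiplies an expensive computation by millions of positions.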

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 234