Search results for: brain cooling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2049


219 Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Authors: Markus Nordlund, Arkadiusz K. Kuczaj

Abstract:

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), referred to in this paper as the Electrically Heated Tobacco System (EHTS), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (the Electrically Heated Tobacco Product (EHTP)) when inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS, as well as for relevant gas mixture compositions measured in the EHTP aerosol, which consists mostly of water, glycerol and nicotine, along with other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use in the Holder has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, mainly glycerol. Minor compounds in the gas mixture could not reach a supersaturated state on their own and therefore could not generate aerosol droplets from the multicomponent gas mixture at the simulated operating conditions. For the analytically characterized aerosol composition and estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. 
This implies that, according to the CNT, an aerosol former such as glycerol needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess whether these conclusions are sensitive to the initial amount of the minor compounds, and to include and represent the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with the initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore not actively contributing to the nucleation process. This implies that, according to the CNT, an aerosol cannot be generated from the multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP without the help of an aerosol former, even if all minor compounds are released or generated in a single puff.
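The role of supersaturation in the CNT argument above can be illustrated with a minimal single-component sketch: the nucleation barrier diverges as the saturation ratio S approaches 1, so a compound that never becomes supersaturated contributes no droplets, while an abundant aerosol former with S well above 1 dominates the nucleation rate. The surface tension and molecular volume below are illustrative glycerol-like values, not parameters from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_barrier(sigma, v_mol, T, S):
    """CNT free-energy barrier dG* = 16*pi*sigma^3*v^2 / (3*(kT*ln S)^2).
    sigma: surface tension (N/m), v_mol: molecular volume (m^3),
    T: temperature (K), S: saturation ratio."""
    if S <= 1.0:
        return math.inf  # not supersaturated: no finite barrier, no nucleation
    return 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * (K_B * T * math.log(S))**2)

def nucleation_rate(J0, sigma, v_mol, T, S):
    """Nucleation rate J = J0 * exp(-dG*/kT), with kinetic prefactor J0."""
    dG = nucleation_barrier(sigma, v_mol, T, S)
    return 0.0 if math.isinf(dG) else J0 * math.exp(-dG / (K_B * T))

# Illustrative glycerol-like values (assumed, not taken from the paper):
sigma = 0.063      # N/m
v_mol = 1.52e-28   # m^3 per molecule
T = 310.0          # K, a plausible cooling-zone temperature

rate_low = nucleation_rate(1e25, sigma, v_mol, T, S=2.0)
rate_high = nucleation_rate(1e25, sigma, v_mol, T, S=8.0)
```

The barrier's inverse dependence on (ln S)^2 is why the minor compounds, which never reach S > 1 alone, produce no droplets in the simulations.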

Keywords: aerosol, classical nucleation theory (CNT), electrically heated tobacco product (EHTP), electrically heated tobacco system (EHTS), modeling, multicomponent, nucleation

Procedia PDF Downloads 241
218 Sustainable Technology and the Production of Housing

Authors: S. Arias

Abstract:

New housing developments, and the technological changes they imply, adapt the lifestyles of their residents, as well as new family structures and forms of work, to the particular needs of a specific group of people; this involves different techniques for managing, organizing, equipping and using a particular territory. Currently, owning one's own space is increasingly important, and cities face the challenge of meeting such demands, as well as providing the energy, water and waste removal necessary in the process of construction and occupation of new human settlements. To date, these demands and needs have not been fully met, resulting in cities that grow without control, poorly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves the incorporation of new technologies throughout the construction process of a dwelling, changing the customs of developers and users; greater effort must go into planning energy savings and thus reducing Greenhouse Gas (GHG) emissions, depending on the geographical location where development is planned. Since the techniques of occupying territory are not the same everywhere, it must be taken into account that they depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. In the analysis required to assess the degree of sustainability of a place, it is necessary to estimate the energy used in artificial air conditioning and lighting. 
Likewise, it is necessary to diagnose the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit and propose sustainability recommendations for architectural spaces in pursuit of energy savings, rational use of water and optimization of natural resources. This can be carried out through the development of a sustainable building code that provides technical recommendations adapted to the regional characteristics of each study site. Such codes would seek to lay the groundwork for building regulations applicable to new human settlements, generating quality, protection and safety in them. These building regulations must be consistent with other national, state and municipal regulations, such as laws on human settlements, urban development and zoning.

Keywords: building regulations, housing, sustainability, technology

Procedia PDF Downloads 328
217 Inhibition of Glutamate Carboxypeptidase Activity Protects Against Retinal Ganglion Cell Death Induced by Ischemia-Reperfusion by Reducing Astroglial Activation in Rats

Authors: Dugeree Otgongerel, Kyong Jin Cho, Yu-Han Kim, Sangmee Ahn Jo

Abstract:

Excessive activation of glutamate receptors is thought to be involved in retinal ganglion cell (RGC) death after ischemia-reperfusion damage. Glutamate carboxypeptidase II (GCPII) is an enzyme responsible for the synthesis of glutamate. Several studies showed that inhibition of GCPII prevents or reduces cellular damage in brain diseases. Thus, in this study, we examined the expression of GCPII in rat retina and the role of GCPII in acute high-IOP ischemia-reperfusion damage of the eye by using a GCPII inhibitor, 2-(phosphonomethyl)pentanedioic acid (2-PMPA). An animal model of ischemia-reperfusion was induced by raising the intraocular pressure for 60 min, followed by reperfusion for 3 days. Rats were randomly divided into four groups: intra-vitreous injection of 2-PMPA (11 or 110 ng per eye) or PBS after ischemia-reperfusion, 2-PMPA treatment without ischemia-reperfusion, and sham-operated normal controls. GCPII immunoreactivity in normal rat retina was detected weakly in the retinal nerve fiber layer (RNFL) and retinal ganglion cell layer (RGL), and strongly in the inner plexiform layer (IPL) and outer plexiform layer (OPL), where it co-stained with an anti-GFAP antibody, suggesting that GCPII is expressed mostly in Muller cells and astrocytes. Immunostaining with an anti-BRN antibody showed that ischemia-reperfusion caused RGC death (31.5%) and decreased retinal thickness in all layers of the damaged retina, but treatment with 2-PMPA twice, at 0 and 48 hours after reperfusion, blocked this retinal damage. The GCPII level in the RNFL was enhanced after ischemia-reperfusion, but this increase was blocked by 2-PMPA treatment. This result was confirmed by western blot analysis showing that the level of GCPII protein after ischemia-reperfusion increased 2.2-fold compared to control, but this increase was blocked almost completely by 110 ng 2-PMPA treatment. 
Interestingly, GFAP immunoreactivity in the retina after ischemia-reperfusion followed by 2-PMPA treatment showed a pattern similar to GCPII: an increase after ischemia-reperfusion but a reduction to the normal level with 2-PMPA treatment. Our data demonstrate that the increase in GCPII protein level after ischemia-reperfusion injury is likely to cause glial activation and/or retinal cell death mediated by glutamate, and GCPII inhibitors may be useful in the treatment of retinal disorders in which glutamate excitotoxicity is pathogenic.

Keywords: glutamate carboxypeptidase II, glutamate excitotoxicity, ischemia-reperfusion, retinal ganglion cell

Procedia PDF Downloads 323
216 Comprehensive Approach to Control Virus Infection and Energy Consumption in an Occupied Classroom

Authors: SeyedKeivan Nateghi, Jan Kaczmarczyk

Abstract:

People nowadays spend most of their time in buildings. Accordingly, maintaining good indoor air quality is very important. The global spread of Covid-19 also highlights the importance of indoor air conditioning in reducing the risk of virus infection. Cooling and heating of a building provide a comfortable air temperature zone for residents, but they are also a significant driver of energy demand: buildings account for more than 30% of the world's primary energy requirement. As energy demand increases, greenhouse gas emissions rise and contribute to global warming. Beyond the damage to ecosystems, this can spread infectious diseases such as malaria, cholera, or dengue to many other parts of the world. With the advent of Covid-19, previous guidance on reducing energy consumption is no longer adequate, because it increases the risk of virus infection among people in the room. The two objectives of low energy consumption and low coronavirus infection risk are in conflict. A classroom with 30 students and one teacher in Katowice, Poland, was considered in order to control both objectives simultaneously. The probability of disease transmission is calculated from the carbon dioxide concentration produced by the occupants. The energy consumption over a given period is estimated with EnergyPlus. The effects of three parameters, the number, angle, and schedule of window openings, on the probability of infection transmission and the energy consumption of the class were investigated. The parameters were examined over wide ranges to determine the best possible conditions for simultaneous control of infection spread and energy consumption. The number of open windows is discrete (0-3), and the other two parameters are continuous (0-180 degrees and 8 AM-2 PM). 
Preliminary results show that changes in the number, angle, and timing of window openings significantly impact the likelihood of virus transmission and the class energy consumption. The greater the number, tilt angle, and duration of window openings, the lower the probability of virus transmission, but the higher the energy consumption. When all the windows were closed for all class hours, the energy consumption for the first day of January was only 0.2 megajoules, while the probability of virus transmission per person in the classroom exceeded 45%. When all windows were open at the maximum angle during class, the chance of transmitting the infection dropped to 0.35%, but the energy consumption rose to 36 megajoules. Therefore, school classrooms need an optimal schedule to control both quantities. In this article, we present a suitable plan for a classroom with natural ventilation through windows to control energy consumption and the possibility of infection transmission at the same time.
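Transmission estimates of the kind described above are commonly computed with the Rudnick-Milton form of the Wells-Riley equation, in which the rebreathed-air fraction is inferred from the indoor CO2 excess over the outdoor level. A minimal sketch follows; the CO2 levels, quanta emission rate and exposure time are assumed illustrative values, not the paper's inputs.

```python
import math

def infection_probability(co2_ppm, co2_outdoor_ppm=420.0, co2_exhaled_ppm=38000.0,
                          n_occupants=31, n_infectors=1, quanta_per_hour=25.0,
                          hours=6.0):
    """Rudnick-Milton form of the Wells-Riley equation: the rebreathed-air
    fraction f is inferred from the indoor CO2 excess, and the infection
    probability follows P = 1 - exp(-f * I * q * t / n)."""
    f = (co2_ppm - co2_outdoor_ppm) / co2_exhaled_ppm  # rebreathed fraction
    exposure = f * n_infectors * quanta_per_hour * hours / n_occupants
    return 1.0 - math.exp(-exposure)

# Assumed CO2 levels for the two ventilation extremes (not measured data):
risk_closed = infection_probability(co2_ppm=2500.0)  # all windows closed
risk_open = infection_probability(co2_ppm=650.0)     # windows fully open
```

Opening the windows lowers the steady-state CO2 and hence the inferred rebreathed fraction, which is exactly the trade-off against ventilation heat loss that the study optimizes.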

Keywords: Covid-19, energy consumption, building, carbon dioxide, EnergyPlus

Procedia PDF Downloads 75
215 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is It" (localization), and "What is It" (identification) pathways. Now, among others, there is a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGC) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. 
These signals arise from the ipRGC cells, discovered only about 20 years ago, and current prescriptions do not yet account for the campana retinal interneurons, discovered only 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 56
214 Blood Ketones as a Point of Care Testing in Paediatric Emergencies

Authors: Geetha Jayapathy, Lakshmi Muthukrishnan, Manoj Kumar Reddy Pulim, Radhika Raman

Abstract:

Introduction: Ketones are the end products of fatty acid metabolism and a source of energy for vital organs such as the brain, heart and skeletal muscles. Ketones are produced in excess when glucose is not available as a source of energy or cannot be utilized, as in diabetic ketoacidosis. Children admitted to the emergency department often have starvation ketosis that is not clinically manifested. Decisions on admitting children with subtle signs to the emergency room can be difficult at times. Point-of-care blood ketone testing can be done at the bedside, even in a primary-level care setting, to supplement and guide management decisions. Hence this study was done to explore the utility of this simple bedside parameter as a supplement in assessing pediatric patients presenting to the emergency department. Objectives: To estimate the blood ketones of children admitted to the emergency department, and to analyze the significance of blood ketones in various disease conditions. Methods: Blood ketones were measured with a point-of-care testing instrument (Abbott Precision Xceed Pro meters) in patients admitted to the emergency room and in out-patients (through the sample collection centre). Study population: Children aged 1 month to 18 years were included in the study; 250 cases (in-patients) and 250 controls (out-patients) were enrolled. Study design: Prospective observational study. Data on details of illness and physiological status were documented. Blood ketones were compared between the two groups, and all in-patients were categorized into various system groups and analysed. Results: Mean blood ketones were higher in in-patients, ranging from 0 to 7.2 with a mean of 1.28, than in out-patients, ranging from 0 to 1.9 with a mean of 0.35. This difference was statistically significant, with a p value < 0.001. 
In-patients with shock (mean 4.15) and diarrheal dehydration (mean 1.85) had significantly higher blood ketone values than patients with other system involvement. Conclusion: Blood ketones were significantly high (above the normal range) in sick pediatric patients requiring admission. Patients with various forms of shock had very high blood ketone values, as found in diabetic ketoacidosis. Ketone values in diarrheal dehydration were moderately high, correlating with the degree of dehydration.

Keywords: admission, blood ketones, paediatric emergencies, point of care testing

Procedia PDF Downloads 187
213 Identification and Characterization of in Vivo, in Vitro and Reactive Metabolites of Zorifertinib Using Liquid Chromatography Ion Trap Mass Spectrometry

Authors: Adnan A. Kadi, Nasser S. Al-Shakliah, Haitham Al-Rabiah

Abstract:

Zorifertinib is a novel, potent, oral small molecule used to treat non-small cell lung cancer (NSCLC). Zorifertinib is an Epidermal Growth Factor Receptor (EGFR) inhibitor and has good blood-brain barrier permeability for NSCLC patients with EGFR mutations. Zorifertinib is currently in phase II/III clinical trials. The current research reports the characterization and identification of the in vitro, in vivo and reactive metabolites of zorifertinib. Prediction of the susceptible sites of metabolism and reactivity pathways (cyanide and GSH) of zorifertinib was performed with the XenoSite web predictor tool. In vitro metabolism of zorifertinib was studied by incubation with rat liver microsomes (RLMs) and isolated perfused rat liver hepatocytes. Extraction of zorifertinib and its in vitro metabolites from the incubation mixtures was done by protein precipitation. In vivo metabolism was studied by giving a single oral dose of zorifertinib (10 mg/kg) to Sprague Dawley rats in metabolic cages using oral gavage. Urine was gathered and filtered at specific time intervals (0, 6, 12, 18, 24, 48, 72, 96 and 120 hr) after zorifertinib dosing. An equal volume of ACN was added to each collected urine sample. Both layers (organic and aqueous) were injected into a liquid chromatography ion trap mass spectrometer (LC-IT-MS) to detect the in vivo zorifertinib metabolites. The N-methylpiperazine ring and the quinazoline group of zorifertinib undergo metabolism to form an iminium ion and an electron-deficient conjugated system, respectively, which are very reactive toward nucleophilic macromolecules. Incubations of zorifertinib with RLMs in the presence of 1.0 mM KCN and 1.0 mM glutathione were made to trap reactive metabolites, which are often responsible for the toxicities associated with such drugs. 
In vitro, nine phase I metabolites, four phase II metabolites, and eleven reactive metabolites (three cyano adducts, five GSH conjugates, and three methoxy metabolites) of zorifertinib were detected by LC-IT-MS. In vivo, eight phase I and ten phase II metabolites of zorifertinib were detected by LC-IT-MS. The in vitro and in vivo phase I metabolic pathways were N-demethylation, O-demethylation, hydroxylation, reduction, defluorination, and dechlorination. The in vivo phase II metabolic reactions were direct conjugation of zorifertinib with glucuronic acid and sulphate.

Keywords: in vivo metabolites, in vitro metabolites, cyano adducts, GSH conjugate

Procedia PDF Downloads 173
212 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD, including medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, blood (plasma) tests have advantages as an approach to population-based disease screening because they are simpler, less invasive and cost-effective. In our study, we used the blood biomarkers dataset of The Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks and decision trees to validate biomarkers with SAS Enterprise Miner. This study used data from ADNI containing 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients suffering from Alzheimer's disease (AD). Participants' samples were separated into two groups: healthy versus MCI and healthy versus AD. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. We then built models with 4 machine learning methods; the best AUCs for the two groups were 0.991 and 0.709, respectively. 
We want to stress that a simple, less invasive, common blood (plasma) test may also allow early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will give physicians the opportunity for early intervention and treatment.
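The t-test filtering step described above, keeping only biomarkers whose group means differ significantly between cohorts before model fitting, can be sketched as follows. The data here are synthetic stand-ins for the ADNI plasma panel (marker sizes, effect size and threshold are all assumptions for illustration).

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

random.seed(0)
N_MARKERS = 10
# Synthetic cohorts: 50 healthy and 50 AD subjects, only marker 3 truly differs.
healthy = [[random.gauss(0, 1) for _ in range(N_MARKERS)] for _ in range(50)]
ad = [[random.gauss(0, 1) for _ in range(N_MARKERS)] for _ in range(50)]
for row in ad:
    row[3] += 1.5  # simulate a biomarker elevated in the AD group

# Keep markers whose |t| crosses a rough significance threshold (|t| > 2,
# roughly p < 0.05 at these sample sizes).
selected = [j for j in range(N_MARKERS)
            if abs(welch_t([r[j] for r in healthy], [r[j] for r in ad])) > 2.0]
```

Only the markers surviving this filter would then be passed to the downstream classifiers (logistic regression, neural network, decision tree).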

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 297
211 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. 
Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL, India (BF-II), and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
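The core accumulation-drainage balance described above reduces to a simple ordinary differential equation for the liquid level: inflow at the production rate, outflow through the tap hole only while it is open. A toy Euler-integration sketch, with assumed illustrative rates (not plant data and without the pressure, two-phase and tap-hole erosion terms of the full model):

```python
def simulate_hearth(prod_rate, drain_rate, tap_open_at, dt=1.0, minutes=120):
    """Toy mass balance for liquid accumulation in the hearth: liquid trickles
    in at a constant production rate and drains through the tap hole only
    after it is opened. Returns the liquid level after each time step."""
    level, history = 0.0, []
    for step in range(int(minutes / dt)):
        t = step * dt
        outflow = drain_rate if (t >= tap_open_at and level > 0.0) else 0.0
        level = max(0.0, level + (prod_rate - outflow) * dt)
        history.append(level)
    return history

# Illustrative rates in tonnes/min (assumed): the level rises for an hour,
# then falls once the tap is opened and drainage exceeds production.
levels = simulate_hearth(prod_rate=5.0, drain_rate=7.0, tap_open_at=60.0)
```

In the full model the drainage rate is not constant but depends on the accumulated liquid levels, furnace gas pressure and tap-hole erosion, and separate balances are kept for metal and slag.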

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 159
210 Exploration of Probiotics and Anti-Microbial Agents in Fermented Milk from Pakistani Camel spp. Breeds

Authors: Deeba N. Baig, Ateeqa Ijaz, Saloome Rafiq

Abstract:

The camel is a religiously and culturally significant animal in Asian and African regions. In Pakistan, the Dromedary and Bactrian are common camel breeds. Beyond its use for transportation, it is a pivotal source of milk and meat. The quality of its milk and meat depends predominantly on the geographical location and the variety of vegetation available for the diet. Camel milk (CM) is highly nutritious because of its reduced cholesterol and sugar contents along with enhanced mineral and vitamin levels. The absence of beta-lactoglobulin (as in human milk) makes CM a safer alternative for infants and children with Cow Milk Allergy (CMA). In addition, it has a unique probiotic profile in both raw and fermented form. A number of lactic acid bacteria (LAB), including Lactococcus, Lactobacillus, Enterococcus, Streptococcus, Weissella, Pediococcus and many other bacteria, have been detected. Of these, Lactobacillus, Bifidobacterium and Enterococcus are widely used commercially for fermentation. CM has high therapeutic value, being known to be effective against various ailments such as fever, arthritis, asthma, gastritis, hepatitis, jaundice, constipation and dropsy, in the postpartum care of women, and as an anti-venom. It also has anti-diabetic, anti-microbial and antitumor potential, along with robust efficacy in the treatment of auto-immune disorders. Recently, the role of CM has been explored in the brain-gut axis for the therapeutics of neurodevelopmental disorders. In this connection, considerable scope remained to explore the probiotic and therapeutic potential of the CM available in Pakistan. Thus, the current study was designed to explore the predominant probiotic flora and antimicrobial potential of CM from different local breeds of Pakistan. The probiotics were identified through biochemical, physiological and ribotyping methods. In addition, bacteriocins (antimicrobial agents) were screened through a PCR-based approach. 
The results of this study revealed that CM from different camel breeds contained a number of shared probiotic candidates, with a limited range of variability. Nucleotide sequence analysis of selected anti-listerial bacteriocins likewise exposed little variability. In conclusion, CM has substantial probiotic availability and significant anti-microbial potential.

Keywords: bacteriocins, camel milk, probiotics potential, therapeutics

Procedia PDF Downloads 100
209 Overcoming Obstacles in UHT High-Protein Whey Beverages by a Microparticulation Process: Scientific and Technological Aspects

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei

Abstract:

Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated using a new microparticulation strategy. By applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS), and differential scanning calorimetry (DSC) tests. Finally, using MWPC-80, a new RTD beverage was formulated, and its shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed by a combination of the classic rate equation, the Arrhenius equation, and a time-temperature relationship. Generally, the results showed that the temperature dependency of the modified sample was significantly (p < 0.05) less than that of the control containing WPC-80. The changes in the elastic modulus of the MWPC did not show any critical point at any of the processing stages, whereas the control sample showed two critical points, during the heating (82.5 °C) and cooling (71.10 °C) stages. The thermal properties of the samples (WPC-80 and MWPC-80) were assessed using DSC at a heating rate of 4 °C/min over a 20-90 °C range. The results did not show any thermal peak in the MWPC DSC curve, which suggests high thermal resistance. On the other hand, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ∆G: 942.52 kJ/mol, ∆H: 857.04 kJ/mol and ∆S: -1.22 kJ/mol·K. Dynamic light scattering was performed, and the results showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively. Moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively. 
After verification of the microparticulation process by the DTS, PSD and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf life study did not show any gelation or precipitation of the MWPC-80 beverage during three months of storage at ambient temperature, whereas the WPC-80 beverage showed significant precipitation and gelation after thermal processing, even at a 3% w/w concentration. Consumer awareness of the nutritional advantages of whey protein has increased the demand for this protein in different food systems, especially RTD beverages, and these results could make a substantial difference in the industry.

Keywords: high protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties

Procedia PDF Downloads 47
208 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by Sounding of Atmosphere by Broadband Emission Radiometry (SABER) instrument onboard Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite zonally averaged over 10°N-15°N are used to study their long-term changes and their responses to solar cycle, quasi-biennial oscillation and El Nino Southern Oscillation. The region is selected to provide more accurate long-term trends and variabilities, which were not possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5K/decade in the stratosphere and that of 3K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere although there is a small positive trend in stratosphere at 25 km. Other than this no significant ozone trend is observed in stratosphere. Negative ozone-QBO response (0.02ppmv/QBO), positive ozone-solar cycle (0.91ppmv/100SFU) and negative response to ENSO (0.51ppmv/SOI) have been found more in mesosphere whereas positive ozone response to ENSO (0.23ppmv/SOI) is pronounced in stratosphere (20-30 km). The temperature response to solar cycle is more positive (3.74K/100SFU) in the upper mesosphere and its response to ENSO is negative around 80 km and positive around 90-100 km and its response to QBO is insignificant at most of the heights. Composite monthly mean of ozone volume mixing ratio shows maximum values during pre-monsoon and post-monsoon season in middle stratosphere (25-30 km) and in upper mesosphere (85-95 km) around 10 ppmv. 
Composite monthly means of temperature show a semi-annual variation, with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the semi-annual oscillation (SAO) becomes weaker above 55 km. The semi-annual variation reappears at 80-90 km, with large values in spring-equinox and winter months. In the upper mesosphere (90-100 km), lower temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and annual oscillation (AO) in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere, around 60-85 km. The phase profiles of both the SAO and AO show downward progression. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
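The regression analysis described above can be sketched as a least-squares fit of the monthly series onto a linear trend plus solar, QBO and ENSO proxy indices. The predictor scalings and the synthetic data below are illustrative assumptions, not values from the study:

```python
import numpy as np

def fit_trend(months, series, solar, qbo, soi):
    """Multiple linear regression of a monthly series onto a linear trend
    plus solar-cycle, QBO and ENSO proxies (illustrative sketch)."""
    X = np.column_stack([
        np.ones_like(months),  # mean level
        months / 120.0,        # linear trend, expressed per decade
        solar / 100.0,         # response per 100 solar flux units
        qbo,                   # response per unit QBO index
        soi,                   # response per unit SOI
    ])
    coeffs, *_ = np.linalg.lstsq(X, series, rcond=None)
    return coeffs

# Synthetic 11-year monthly series with a built-in -0.5 K/decade trend
rng = np.random.default_rng(0)
t = np.arange(132.0)
solar = 80 + 60 * np.sin(2 * np.pi * t / 132)   # toy F10.7 proxy
qbo = np.sin(2 * np.pi * t / 28)                # ~28-month QBO proxy
soi = rng.standard_normal(t.size)               # toy SOI
y = 250 - 0.5 * (t / 120) + 3.7 * (solar / 100) + 0.01 * rng.standard_normal(t.size)

c = fit_trend(t, y, solar, qbo, soi)
print(round(c[1], 2))  # recovers the built-in trend per decade
```

With this parameterization the fitted coefficients read off directly in the units quoted in the abstract (K/decade, K/100 SFU, and per unit QBO or SOI index).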

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 391
207 In-Vitro Evaluation of the Long-Term Stability of PEDOT:PSS Coated Microelectrodes for Chronic Recording and Electrical Stimulation

Authors: A. Schander, T. Tessmann, H. Stemmann, S. Strokov, A. Kreiter, W. Lang

Abstract:

For the chronic application of neural prostheses and other brain-computer interfaces, long-term-stable microelectrodes for electrical stimulation are essential. In recent years, much development work has investigated appropriate materials for these electrodes. One of these materials is the electrically conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT), which has lower impedance and higher charge injection capacity than noble metals like gold and platinum. However, the long-term stability of this polymer is still unclear. This paper therefore reports on the in-vitro evaluation of the long-term stability of PEDOT-coated gold microelectrodes. For this purpose, a highly flexible electrocorticography (ECoG) electrode array, based on the polymer polyimide, is used. This array consists of circular gold electrodes with a diameter of 560 µm (0.25 mm²). In total, 25 electrodes of this array were coated simultaneously with the polymer PEDOT:PSS in a cleanroom environment using a galvanostatic electropolymerization process. After coating, the array is additionally sterilized using a steam sterilization process (121°C, 1 bar, 20.5 min) to simulate the autoclaving that would precede implantation of such an electrode array. The long-term measurements were performed in phosphate-buffered saline solution (PBS, pH 7.4) at a constant body temperature of 37°C. For the in-vitro electrical stimulation, a one-channel bipolar current stimulator is used. The stimulation protocol consists of a bipolar current amplitude of 5 mA (cathodal phase first), a pulse duration of 100 µs per phase, a pulse pause of 50 µs and a frequency of 1 kHz. A PEDOT:PSS-coated gold electrode with an area of 1 cm² serves as the counter electrode. The electrical stimulation is performed continuously, delivering a total of 86.4 million bipolar current pulses per day. The condition of the PEDOT-coated electrodes is monitored in between with electrical impedance spectroscopy measurements.
The results of this study demonstrate that the PEDOT-coated electrodes are stable for more than 3.6 billion bipolar current pulses. The unstimulated electrodes also currently show no degradation after a period of 5 months. These results indicate appropriate long-term stability of this electrode coating for chronic recording and electrical stimulation. The long-term measurements are still continuing, to investigate the lifetime limit of this electrode coating.
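As a plausibility check, the per-day pulse count follows directly from the protocol numbers quoted above (a continuous 1 kHz bipolar pulse train), and the charge per phase from the 5 mA amplitude and 100 µs phase duration; the charge-density arithmetic below is our own back-of-the-envelope calculation, not a figure from the paper:

```python
freq_hz = 1_000                       # bipolar pulse repetition rate
pulses_per_day = freq_hz * 60 * 60 * 24
print(pulses_per_day)                 # 86400000, matching the 86.4 million/day figure

days_to_3_6e9 = 3.6e9 / pulses_per_day
print(round(days_to_3_6e9, 1))        # ~41.7 days of continuous stimulation

charge_per_phase_uC = 5e-3 * 100e-6 * 1e6        # I * t, in microcoulombs
charge_density = charge_per_phase_uC / 0.0025    # electrode area 0.25 mm² = 0.0025 cm²
print(round(charge_per_phase_uC, 2), round(charge_density, 1))  # 0.5 µC/phase, 200.0 µC/cm²
```

So the reported 3.6 billion pulses correspond to roughly 42 days of uninterrupted stimulation at a charge density of about 200 µC/cm² per phase.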

Keywords: chronic recording, electrical stimulation, long-term stability, microelectrodes, PEDOT

Procedia PDF Downloads 561
206 Testing Supportive Feedback Strategies in Second/Foreign Language Vocabulary Acquisition between Typically Developing Children and Children with Learning Disabilities

Authors: Panagiota A. Kotsoni, George S. Ypsilandis

Abstract:

Learning an L2 is a demanding process for all students, and in particular for those with learning disabilities (LD), who demonstrate an inability to keep up with their classmates' progress in a given period of time. This area of study, i.e., examining children with learning disabilities in L2, has not (yet) attracted the growing interest registered in L1 and thus remains comparatively neglected. It is this scientific field that this study wishes to contribute to. The long-term purpose of this study is to locate effective Supportive Feedback Strategies (SFSs) and add to the quality of learning of second language vocabulary in both typically developing (TD) and LD children. Specifically, this study investigates and compares the performance of TD and LD children on two different types of SFSs related to short- and long-term vocabulary retention. In this study, two different SFSs were applied to a total of ten (10) unknown vocabulary items. Both strategies provided morphosyntactic clarifications for new contextualized vocabulary items. The traditional (direct) SFS provided the information on a single hypertext page, opened by selecting the relevant item. The experimental (engaging) SFS provided the exact same information, split over three successive hypertext pages in the form of a hybrid dialogue, asking the subjects to move on to the next page by selecting the relevant link. It was hypothesized that in this way the subjects would engage in their own learning process by actively asking for more information, which would further lead to better retention. The participants were fifty-two (52) foreign language learners (33 TD and 19 LD), aged 9 to 12, attending an English language school at level A1 (CEFR). The design of the study followed a typical pre-post-post test procedure, testing after an hour and after a week.
The results indicated statistically significant group differences, with TD children performing significantly better than the LD group in both short- and long-term memory measurements and with both SFSs. As regards the effectiveness of one SFS over the other, the initial hypothesis was not supported by the evidence: the traditional SFS was more effective than the experimental one in both TD and LD children. This difference proved statistically significant only in the long-term memory measurement and only in the TD group. It may be concluded that the human brain adapts to different SFSs, although it shows a small preference for information provided in a direct manner.
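The abstract does not name the statistical tests used; as an illustrative sketch only, a paired t statistic on fabricated per-child retention scores (not study data) shows the kind of within-group comparison of the two strategies involved:

```python
import numpy as np

# Hypothetical items retained (out of 10) by ten children under each SFS
direct = np.array([8., 7, 9, 6, 8, 7, 10, 8, 7, 9])
engaging = np.array([7., 6, 8, 6, 7, 6, 9, 7, 7, 8])

diff = direct - engaging                     # per-child advantage of direct SFS
n = diff.size
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(round(t, 2))  # 6.0, beyond the ~2.26 two-tailed .05 cutoff for df = 9
```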

Keywords: learning disabilities, memory, second/foreign language acquisition, supportive feedback

Procedia PDF Downloads 263
205 Biological Institute Actions for Bovine Mastitis Monitoring in Low Income Dairy Farms, Brazil: Preliminary Data

Authors: Vanessa Castro, Liria H. Okuda, Daniela P. Chiebao, Adriana H. C. N. Romaldini, Harumi Hojo, Marina Grandi, Joao Paulo A. Silva, Alessandra F. C. Nassar

Abstract:

The Biological Institute of Sao Paulo, in partnership with a private company, runs an Animal Health Family Farming Program (Prosaf) to enable communication between smallholder farmers and scientists, with on-farm consulting and lectures, solving health problems that will benefit agricultural productivity. In the Vale do Paraiba region, a dairy region of Sao Paulo State, southern Brazil, many farms of this type are found with several milk quality problems. Most of these farms are profit-based businesses, however with non-technified cattle-rearing systems and uncertain veterinary assistance. Feedback from Prosaf showed that the biggest complaints from farmers were low milk production, sick animals and, mainly, loss of selling price due to high somatic cell counts (SCC) and total bacterial counts (TBC). The aims of this study were to improve milk quality, animal hygiene and herd health status by adjusting general management practices and introducing techniques of sanitary control and milk monitoring on five dairy farms in Sao Jose do Barreiro municipality, Sao Paulo State, Brazil, to increase their profits. A total of 119 milk samples from 56 animals positive for the California Mastitis Test (CMT) were collected. A positive CMT indicates subclinical mastitis; therefore, laboratory exams (microbiological, biochemical and antibiogram tests) were performed on the milk, detecting the presence of Staphylococcus aureus (41.8%), Bacillus sp. (11.8%), Streptococcus sp. (2.1%), non-fermenting, motile and oxidase-negative Gram-negative bacilli (2.1%) and Enterobacter (2.1%). Antibiograms revealed high resistance to gentamicin and streptomycin, probably due to indiscriminate use of antibiotics without veterinary prescription. We suggested improving hygiene management throughout the complete milking and cooling-tank system.
Using the results of the laboratory tests, the animals were properly treated, and the effects observed were better CMT outcomes and lower SCCs and TBCs, leading to an increase in milk pricing. This study will have a positive impact on family farmers in the Sao Paulo State dairy region by improving the market competitiveness of their milk.

Keywords: milk, family farming, food quality, antibiogram, profitability

Procedia PDF Downloads 124
204 Destigmatising Generalised Anxiety Disorder: The Differential Effects of Causal Explanations on Stigma

Authors: John McDowall, Lucy Lightfoot

Abstract:

Stigma constitutes a significant barrier to the recovery and social integration of individuals affected by mental illness. Although there is some debate in the literature regarding the definition and utility of stigma as a concept, it is widely accepted that it comprises three components: stereotypical beliefs, prejudicial reactions, and discrimination. Stereotypical beliefs form the cognitive, knowledge-based component of stigma, referring to beliefs (often negative) about members of a group that are based on cultural and societal norms (e.g. 'People with anxiety are just weak'). Prejudice refers to the affective/evaluative component of stigma and describes the endorsement of negative stereotypes and the resulting negative emotional reactions (e.g. 'People with anxiety are just weak, and they frustrate me'). Discrimination refers to the behavioural component of stigma, which is arguably the most problematic, as it exerts a direct effect on the stigmatised person and may lead people to behave in a hostile or avoidant way towards them (e.g. refusing to hire them). Research exploring anti-stigma initiatives focuses primarily on an educational approach, with the view that accurate information will replace misconceptions and decrease stigma. Many approaches take a biogenetic stance, emphasising brain and biochemical deficits, the idea being that 'mental illness is an illness like any other.' While this approach tends to reduce blame effectively, it has also demonstrated negative effects, such as increasing prognostic pessimism, the desire for social distance, and the perception of stereotypes. In the present study, 144 participants were split into three groups and read one of three vignettes presenting causal explanations for Generalised Anxiety Disorder (GAD): one explanation emphasised biogenetic factors as important in the etiology of GAD, another emphasised psychosocial factors (e.g. aversive life events, poverty), and a third stressed the adaptive features of the disorder from an evolutionary viewpoint. A variety of measures tapping the various components of stigma were administered following the vignettes. No difference in stigma measures as a function of causal explanation was found. People who had had contact with mental illness in the past were significantly less stigmatising across a wide range of measures, but this did not interact with the type of causal explanation.
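A three-vignette between-groups design like this is typically analysed with a one-way ANOVA (the abstract does not name the test); the minimal sketch below, with toy scores rather than the study's data, computes the F statistic by hand:

```python
import numpy as np

def one_way_anova_F(groups):
    """F = between-groups mean square / within-groups mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, N = len(groups), all_x.size
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Toy stigma scores for three causal-explanation conditions
groups = [np.array([1., 2, 3]), np.array([2., 3, 4]), np.array([3., 4, 5])]
F = one_way_anova_F(groups)
print(F)  # 3.0
```

With the study's 144 participants split into three groups, the obtained F would be compared against the critical value for df = (2, 141).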

Keywords: generalised anxiety disorder, discrimination, prejudice, stigma

Procedia PDF Downloads 257
203 Antioxidative, Anticholinesterase and Anti-Neuroinflammatory Properties of Malaysian Brown and Green Seaweeds

Authors: Siti Aisya Gany, Swee Ching Tan, Sook Yee Gan

Abstract:

Diminished antioxidant defense or increased production of reactive oxygen species in biological systems can result in oxidative stress, which may lead to various neurodegenerative diseases, including Alzheimer's disease (AD). Microglial activation also contributes to the progression of AD by producing several pro-inflammatory cytokines, nitric oxide (NO) and prostaglandin E2 (PGE2). Oxidative stress and inflammation have been reported as possible pathophysiological mechanisms underlying AD. In addition, the cholinergic hypothesis postulates that memory impairment in patients with AD is also associated with a deficit of cholinergic function in the brain. Although a number of drugs have been approved for the treatment of AD, most of these synthetic drugs have diverse side effects and yield relatively modest benefits. Marine algae have great potential in pharmaceutical and biomedical applications, as they are valuable sources of bioactive compounds with anti-coagulant, anti-microbial, anti-oxidative, anti-cancer and anti-inflammatory properties. Hence, this study aimed to provide an overview of the ability of Malaysian seaweeds (Padina australis, Sargassum polycystum and Caulerpa racemosa) to inhibit oxidative stress, neuroinflammation and cholinesterase enzymes. All tested samples significantly exhibited potent DPPH and moderate superoxide anion radical scavenging ability (P<0.05). Hexane and methanol extracts of S. polycystum exhibited the most potent radical scavenging ability, with IC50 values of 0.1572 ± 0.004 mg/ml and 0.8493 ± 0.02 mg/ml for the DPPH and ABTS assays, respectively. The hexane extract of C. racemosa gave the strongest superoxide radical inhibitory effect (IC50 of 0.3862 ± 0.01 mg/ml). Most seaweed extracts significantly inhibited the production of cytokines (IL-6, IL-1β, TNF-α) and NO in a concentration-dependent manner without causing significant cytotoxicity to the lipopolysaccharide (LPS)-stimulated microglial cells (P<0.05).
All extracts suppressed cytokine and NO levels by more than 80% at a concentration of 0.4 mg/ml. In addition, C. racemosa and S. polycystum also showed anti-acetylcholinesterase activity, with IC50 values ranging from 0.086-0.115 mg/ml. Moreover, C. racemosa and P. australis were also active against butyrylcholinesterase, with IC50 values ranging from 0.118-0.287 mg/ml.
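The IC50 values quoted here are conventionally read off a dose-response curve; a minimal sketch, assuming simple linear interpolation (the authors' exact fitting method is not stated) and made-up inhibition data:

```python
import numpy as np

def ic50(conc_mg_ml, pct_inhibition):
    """Concentration at 50% inhibition by linear interpolation.
    pct_inhibition must increase with concentration."""
    return float(np.interp(50.0, pct_inhibition, conc_mg_ml))

# Hypothetical DPPH scavenging data for a seaweed extract
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # mg/ml
inhib = np.array([20.0, 35.0, 55.0, 75.0, 90.0])  # % inhibition
print(round(ic50(conc, inhib), 3))  # 0.175 mg/ml
```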

Keywords: anti-cholinesterase, anti-oxidative, neuroinflammation, seaweeds

Procedia PDF Downloads 642
202 Quality of Life after Damage Control Laparotomy for Trauma

Authors: Noman Shahzad, Amyn Pardhan, Hasnain Zafar

Abstract:

Introduction: Though the short-term survival advantage of damage control laparotomy in the management of critically ill trauma patients is established, little is known about the long-term quality of life of these patients. The fascial closure rate after damage control laparotomy is reported to be 20-70 percent. Abdominal wall reconstruction in those who fail to achieve fascial closure is challenging and can potentially affect these patients' quality of life. Methodology: We conducted a retrospective matched cohort study. Adult patients who underwent damage control laparotomy from Jan 2007 till Jun 2013 were identified through medical records. Patients who had concomitant disabling brain injury or limb injuries requiring amputation were excluded. An age-, gender- and presentation-time-matched non-exposure group of patients who underwent laparotomy for trauma without damage control was identified, one for each damage control laparotomy patient. Quality of life assessment was done via telephonic interview at least one year after the operation, using, with permission, the Urdu version of the EuroQol Group quality of life (QOL) questionnaire EQ-5D. The Wilcoxon signed-rank test was used to compare QOL scores, and the McNemar test was used to compare individual parameters of the QOL questionnaire. The study was approved by the institutional ethical review committee. Results: Out of 32 patients who underwent damage control laparotomy during the study period, 20 fulfilled the selection criteria, for whom 20 matched controls were selected. The median age of patients (IQ range) was 33 (26-40) years. The fascial closure rate in the damage control laparotomy group was 40% (8/20). One third of those who did not achieve fascial closure (4/12) underwent abdominal wall reconstruction. Self-reported QOL scores of damage control laparotomy patients were significantly worse than those of the non-damage control group (p = 0.032). There was no statistically significant difference between the two groups regarding individual QOL measures.
Significantly more patients in the damage control group required the use of an abdominal binder, and more patients in the damage control group had to either change their job or had limitations in continuing their previous job. Our study was not adequately powered to detect the factors responsible for the worse QOL in the damage control group. Conclusion: The quality of life of damage control patients is worse than that of age- and gender-matched patients who underwent trauma laparotomy without damage control. Adequately powered studies need to be conducted to explore the factors responsible for this finding, with a view to potential improvement.
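The McNemar test named in the methods compares paired binary outcomes between matched pairs; a minimal sketch with hypothetical discordant-pair counts (the abstract does not report the per-item counts):

```python
def mcnemar_chi2(b, c):
    """McNemar statistic with continuity correction for discordant
    pair counts b and c (compare against 3.84: chi-square, df=1, p=.05)."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# b: pairs where only the damage-control patient reports a problem on an
# EQ-5D item; c: pairs where only the matched control does (made-up counts)
chi2 = mcnemar_chi2(b=7, c=2)
print(round(chi2, 2))   # 1.78
print(chi2 > 3.84)      # False: not significant at p = .05
```

A non-significant item-level result like this toy example is consistent with the abstract's finding of no significant difference on individual QOL measures despite a significant overall score difference.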

Keywords: damage control laparotomy, laparostomy, quality of life

Procedia PDF Downloads 250
201 Analyzing the Causes of Amblyopia among Patients in Tertiary Care Center: Retrospective Study in King Faisal Specialist Hospital and Research Center

Authors: Hebah M. Musalem, Jeylan El-Mansoury, Lin M. Tuleimat, Selwa Alhazza, Abdul-Aziz A. Al Zoba

Abstract:

Background: Amblyopia is a condition that affects the visual system, triggering a decrease in visual acuity without a known underlying pathology. It is due to abnormal vision development in childhood or infancy. Most importantly, the vision loss is preventable or reversible with the right kind of intervention in most cases. Strabismus, sensory defects and anisometropia are all well-known causes of amblyopia; ocular misalignment in strabismus is considered the most common cause of amblyopia worldwide. The risk of developing amblyopia increases in children who are premature, developmentally delayed, or have brain lesions affecting the visual pathway. The prevalence of amblyopia varies between 2 and 5% worldwide according to the literature. Objective: To determine the different causes of amblyopia in pediatric patients seen in the ophthalmology clinic of a tertiary care center, i.e., King Faisal Specialist Hospital and Research Center (KFSH&RC). Methods: This is a hospital-based, retrospective study based on reviewing patients' files in the Ophthalmology Department of KFSH&RC in Riyadh city, Kingdom of Saudi Arabia. Inclusion criteria: amblyopic pediatric patients between 6 months and 18 years of age who attended the clinic from 2015 to 2016. Exclusion criteria: patients above 18 years of age and any patient too uncooperative to obtain an accurate vision measurement or a proper refraction. Detailed ocular and medical histories were recorded. The examination protocol includes a full ocular exam, full cycloplegic refraction, visual acuity measurement, and ocular motility and strabismus evaluation. All data were organized in tables and graphs and analyzed by a statistician. Results: Our preliminary results will be discussed on the spot by our corresponding author. Conclusions: In this study we focused on utilizing various examination techniques, which enhanced our results and highlighted a clear correlation between amblyopia and its causes.
This paper's recommendations emphasize critical testing protocols to be followed for amblyopic patients, especially in tertiary care centers.

Keywords: amblyopia, amblyopia causes, amblyopia diagnostic criterion, amblyopia prevalence, Saudi Arabia

Procedia PDF Downloads 134
200 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right side), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits but instead reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN, wherein a patient's visual field is artificially shifted laterally, resulting in sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only displace, not twist, the surroundings. This can, however, be accomplished using a head mounted display (HMD) and a web camera. Therefore, this study investigated whether an HMD system could be used to correct the spatial perception of USN patients in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2), with a web camera (Ovrvision) displaying a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four targets, assigned randomly, a total of 48 times. Before and after the intervention, each subject's body-centre judgment (BCJ) was tested by asking them to point a finger at a touch panel straight in front of their xiphisternum, 10 times, sight unseen. Results: The intervention caused the location pointed to during the BCJ to shift 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p<.01).
Conclusions: The results in the horizontal plane are consistent with those observed following PA. Furthermore, the HMD and web camera were able to elicit 3D effects, in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN, and on investigating whether subject posture is also affected by the HMD system.
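The significance of the reported shifts can be checked from the summary statistics alone; the paired t values recomputed below from the quoted means and SDs (n = 8) are indeed well beyond a p = .01 threshold:

```python
import math

def paired_t(mean_diff, sd_diff, n):
    """Paired t statistic from summary statistics: mean difference,
    SD of the differences, and sample size."""
    return mean_diff / (sd_diff / math.sqrt(n))

t_horizontal = paired_t(35, 17, 8)   # 35 ± 17 mm leftward shift
t_frontal = paired_t(46, 29, 8)      # 46 ± 29 mm downward shift
print(round(t_horizontal, 2))        # 5.82
print(round(t_frontal, 2))           # 4.49
print(t_horizontal > 3.50 and t_frontal > 3.50)  # both exceed t(7) at p = .01
```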

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 257
199 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically been a field that receives little research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, so thorough and accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death, to the minute, for someone found deceased with no witnesses present. In actuality, an unattended decedent's time of death can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperature measurement was an invasive step taken by death investigators to determine the decedent's core temperature, which was entered into an equation to determine an approximate time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled, and this once-commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors enter into the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death.
The research is significant and important to investigations and can bring accuracy to a historically inaccurate area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for time of death in unwitnessed deaths, instead of the art that the determination currently is. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature; the decedent's height, weight, sex and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed with the multiple variables studied and will be available for presentation in January 2019.
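For context, the oldest rule-of-thumb linking body cooling to the post-mortem interval is the Glaister equation; it is given here only as background to the cooling-based approach, not as the method used in this study:

```python
def glaister_hours(core_temp_f):
    """Glaister equation: estimated hours since death from core body
    temperature in degrees Fahrenheit (a coarse rule of thumb only)."""
    return (98.4 - core_temp_f) / 1.5

pmi = glaister_hours(92.4)  # hypothetical core temperature reading
print(round(pmi, 1))        # 4.0 hours, i.e. inside a typical 4-6 hour window
```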

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 90
198 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems

Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt

Abstract:

Our current food systems contribute to an increase in malnutrition, with more people overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly, contributing significantly to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact than their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, lack indispensable amino acids and are extremely limited in their functionality and, thus, in their food application potential. They are known to have low solubility in water and to change their properties during processing. The low solubility is the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in the dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate derived from brewers' spent grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value, including all indispensable amino acids, which allows it to close the protein quality gap of plant proteins. Moreover, it possesses strong techno-functional properties: it is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and has a foaming capacity superior to that of soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not cause changes in viscosity during heating and cooling of dispersions, such as beverages.
Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact than that of dairy or other plant protein ingredients. Life cycle assessment showed that EverPro has the lowest global warming impact compared to soy protein isolate, pea protein isolate, whey protein isolate and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication and land use than the protein sources mentioned above. EverPro is a prime example of a sustainable ingredient, and the type of plant protein the food industry has been waiting for: nutritious, multi-functional and environmentally friendly.

Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient

Procedia PDF Downloads 55
197 Assessment of the Effects of Urban Development on Urban Heat Islands and Community Perception in Semi-Arid Climates: Integrating Remote Sensing, GIS Tools, and Social Analysis - A Case Study of the Aures Region (Khanchela), Algeria

Authors: Amina Naidja, Zedira Khammar, Ines Soltani

Abstract:

This study investigates the impact of urban development on the urban heat island (UHI) effect in the semi-arid Aures region of Algeria, integrating remote sensing data with statistical analysis and community surveys to examine the interconnected environmental and social dynamics. Using Landsat 8 satellite imagery, temporal variations in the Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), and land use/land cover (LULC) changes are analyzed to understand patterns of urbanization and environmental transformation. These environmental metrics are correlated with land surface temperature (LST) data derived from remote sensing to quantify the UHI effect. To incorporate the social dimension, a structured questionnaire survey is conducted among residents in selected urban areas. The survey assesses community perceptions of urban heat, its impacts on daily life, health concerns, and coping strategies. Statistical analysis is employed to analyze survey responses, identifying correlations between demographic factors, socioeconomic status, and perceived heat stress. Preliminary findings reveal significant correlations between built-up areas (NDBI) and higher LST, indicating the contribution of urbanization to local warming. Conversely, areas with higher vegetation cover (NDVI) exhibit lower LST, highlighting the cooling effect of green spaces. Social survey results provide insights into how UHI affects different demographic groups, with vulnerable populations experiencing greater heat-related challenges. By integrating remote sensing analysis with statistical modeling and community surveys, this study offers a comprehensive understanding of the environmental and social implications of urban development in semi-arid climates. The findings contribute to evidence-based urban planning strategies that prioritize environmental sustainability and social well-being. 
Future research should focus on policy recommendations and community engagement initiatives to mitigate UHI impacts and promote climate-resilient urban development.
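The spectral indices named in this abstract have standard definitions; a minimal sketch follows, assuming the usual Landsat 8 band assignments (red = band 4, NIR = band 5, SWIR1 = band 6) and invented reflectance patches rather than study data:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndbi(swir1, nir):
    """NDBI = (SWIR1 - NIR) / (SWIR1 + NIR)."""
    return (swir1 - nir) / (swir1 + nir)

# Toy 2x2 reflectance patches: row 0 vegetated, row 1 built-up
nir  = np.array([[0.40, 0.35], [0.20, 0.22]])
red  = np.array([[0.10, 0.12], [0.18, 0.19]])
swir = np.array([[0.15, 0.18], [0.30, 0.32]])

v = ndvi(nir, red)
b = ndbi(swir, nir)
# vegetated pixels: high NDVI, low NDBI; built-up pixels: the reverse
print(v[0, 0] > b[0, 0], b[1, 0] > v[1, 0])
```

Correlating per-pixel NDBI against land surface temperature then quantifies the warming contribution of built-up areas reported in the study.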

Keywords: urban heat island, remote sensing, social analysis, NDVI, NDBI, LST, community perception

Procedia PDF Downloads 12
196 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent permanent vision loss caused by abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The true positive rate (TPR) and false positive rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced TPRs of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively.
This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
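The geometric features of stage 2 can be illustrated along the following lines (a minimal sketch only; the coordinates, the choice of eye-corner landmarks as reference, and the function name are hypothetical and not the authors' implementation, in which a Mask R-CNN supplies the pupil centres and a landmark detector the eye landmarks):

```python
import math

def deviation_features(pupil, inner_corner, outer_corner):
    """Distance (pixels) and angle (degrees, vs the horizontal axis) of the
    pupil centre relative to the midpoint of the two eye corners."""
    mx = (inner_corner[0] + outer_corner[0]) / 2
    my = (inner_corner[1] + outer_corner[1]) / 2
    dx, dy = pupil[0] - mx, pupil[1] - my
    dist = math.hypot(dx, dy)              # magnitude of the misalignment
    angle = math.degrees(math.atan2(dy, dx))  # direction of the deviation
    return dist, angle

# Hypothetical pixel coordinates: a pupil displaced horizontally from the
# eye-corner midpoint suggests a horizontal (eso/exo) deviation.
print(deviation_features((55.0, 40.0), (70.0, 40.0), (30.0, 40.0)))
```

A purely horizontal displacement yields an angle near 0 or 180 degrees, while a vertical deviation drives the angle towards ±90 degrees, which is how the distance/angle pair encodes both degree and direction of misalignment.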

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 59
195 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide

Authors: Michalina Aniola, Andrzej Katrusiak

Abstract:

4,4-Bipyridine is an important compound often used in chemical practice and, more recently, frequently applied in designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, is orthorhombic at ambient pressure, space group P2₁2₁2₁ (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P2₁/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. The high-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallization. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br⁻ chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH⁺ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, whereas in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to the high-pressure phase b depends entirely on the shear strain of the lattice. 
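The geometric point about the shear strain can be checked with a toy calculation (the cell parameters below are invented round numbers, not the measured values): for a monoclinic cell, V = abc·sin(β), so the volume can drop even when a, b, and c all lengthen, provided the β angle deviates sufficiently from 90°.

```python
import math

# Hypothetical cell parameters in angstroms, chosen only to illustrate the
# effect; they are NOT the measured values for 44biPyHBrH2O.
a1, b1, c1 = 8.0, 10.0, 12.0               # orthorhombic "phase a": V = abc
a2, b2, c2, beta = 8.1, 10.1, 12.1, 110.0  # monoclinic "phase b"

V_a = a1 * b1 * c1                               # orthorhombic volume
V_b = a2 * b2 * c2 * math.sin(math.radians(beta))  # V = abc*sin(beta)

# All three axes are longer in "phase b", yet the cell volume is smaller,
# purely because of the monoclinic shear (beta far from 90 degrees).
print(V_a, round(V_b, 1))
```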
Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. When the compound precipitated from saturated methanol solution at 0.1 GPa, a temperature of 423 K was required to dissolve the whole sample; the subsequent slow recrystallization under isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved under isochoric conditions at 473 K and recrystallized by slow controlled cooling, an N,N-dimethylation reaction took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and that the free base 44biPy and H₂O remained in solution. The observed reactions indicate that high pressure destabilizes the ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure.

Keywords: conformation, high-pressure, negative area compressibility, polymorphism

Procedia PDF Downloads 217
194 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. Neural networks transmit information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law into infinitely long cylinders with the usual core conductor assumptions, where the membrane potential is conserved at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of the equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of the data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron model loses twofold more exergy than the cat models, and the exergy loss and entropy generation of the squid were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the exergy loss and entropy generation in the dendritic trees are directly proportional to the electrotonic length. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. 
Concurrently, the Na⁺ ion load of a single action potential, the metabolic energy utilization, and their thermodynamic aspects were evaluated for the squid giant axon and a mammalian motoneuron model. The energy demand of the neurons is supplied in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation differed in each model depending on the variations in ion transport along the channels.
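The 3/2 power law invoked above is Rall's branching condition, which is what allows a symmetric tree to be collapsed into equivalent cylinders: at each bifurcation the parent diameter raised to the 3/2 power must equal the sum of the daughters'. A minimal check (diameters are hypothetical, in micrometres):

```python
def rall_matched(parent_d, child_ds, tol=1e-6):
    """True if parent_d**(3/2) equals the sum of child diameters**(3/2),
    i.e., the bifurcation satisfies Rall's 3/2 power law."""
    return abs(parent_d ** 1.5 - sum(d ** 1.5 for d in child_ds)) < tol

# A symmetric bifurcation obeying the rule: each daughter has diameter
# d_parent / 2**(2/3), so that 2 * d_child**(3/2) == d_parent**(3/2).
parent = 4.0
child = parent / 2 ** (2 / 3)
print(rall_matched(parent, [child, child]))
```

Under this condition the input impedance is continuous across the branch point, which is what justifies the equivalent-cylinder reduction used in the study's cable model.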

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 363
193 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions

Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran

Abstract:

Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae. Some require neuroimaging, most commonly CT and MRI of the brain. These can be overused when not indicated, and their yield depends on multiple factors relating to the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of the neuroimaging. The performed neuroimaging was analyzed and its yield determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were: blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging procedures were performed. Ninety-four scans (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014). Four physical exam findings were significantly associated with abnormal imaging: a visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), an afferent pupillary defect (p-value = 0.018), and an abnormal optic disc exam (p-value = 0.009). Conclusion: The risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurred vision or changes in the visual field on history taking, while visual field irregularities, abnormal pupil reactivity with or without an afferent pupillary defect, and abnormal optic discs are risk factors on physical examination. 
These findings, when present, should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. These results support a well-structured, patient-centered algorithm to be followed by ED physicians.
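Associations of this kind (symptom vs. abnormal imaging) are typically tested with a Pearson chi-square on a 2x2 contingency table, which for one degree of freedom needs only the standard library (the counts below are invented for illustration; they are not the study's data):

```python
import math

# Hypothetical 2x2 table for one presenting symptom (invented counts).
# Rows: symptom present / absent; columns: abnormal / normal imaging.
a, b = 22, 38   # symptom present:  abnormal, normal
c, d = 10, 56   # symptom absent:   abnormal, normal

n = a + b + c + d
# Pearson chi-square for a 2x2 table, without continuity correction:
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# For 1 degree of freedom, chi-square is the square of a standard normal,
# so the p-value is the survival function erfc(sqrt(x / 2)).
p = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 2), round(p, 4))
```

A statistic above the 3.84 critical value (p < 0.05 at one degree of freedom) would flag the symptom as a significant risk indicator, as reported for blurry vision and visual field change in the study.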

Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators

Procedia PDF Downloads 153
192 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger

Authors: Mounir Baccar, Imen Jmal

Abstract:

Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage can not only reduce the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most preferred storage techniques is Latent Heat Thermal Energy Storage (LHTES) by Phase Change Materials (PCM), owing to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also aims at investigating the temporal evolution of PCM solidification, as well as the longitudinal profiles of the heat transfer fluid (HTF) circulating in the duct. The present research undertakes the study of two cases: the first treats the solidification of the PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of the PCM in a heat exchanger of the same type with the addition of fins (3 fins, 5 fins, and 9 fins). Without fins, stratification of the PCM from colder to hotter during the heat transfer process was noted. This behavior prevents the formation of thermo-convective cells in the PCM region and thus makes the heat transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which contributes to the reduction of the discharging time and the increase of the outlet air (HTF) temperature. 
However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant because the confinement of the liquid PCM spaces hinders the development of thermo-convective flow. Hence, it can be concluded that the effect of natural convection is not very significant for a high number of fins. In the optimum case, with 3 fins, the temperature increase of the HTF exceeds approximately 10 °C during the first 30 minutes. As solidification progresses from the surfaces of the PCM container and propagates towards the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between the PCM and the air. As the solid PCM layer gets thicker, a progressive regression of the flow field is induced in the liquid phase, thus leading to the inhibition of the heat extraction process. After about 2 hours, 68% of the PCM had become solid, and heat transfer was almost entirely dominated by conduction.
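The conduction-dominated late stage, in which latent heat release slows the advance of the solidification front, can be sketched with a one-dimensional explicit enthalpy method (a deliberately simplified, convection-free toy model; the property values are rounded RT27-like figures and the slab geometry is invented, so it does not reproduce the paper's finite-volume results):

```python
# Rounded, assumed RT27-like properties (SI units).
rho, c, k = 800.0, 2000.0, 0.2     # density, specific heat, conductivity
L_lat, Tm = 179e3, 27.0            # latent heat (J/kg), melting point (degC)
T_wall, T_init = 7.0, 32.0         # cooled wall / initial liquid temperature
N, dx, dt = 30, 1e-3, 1.0          # 30 mm slab, 1 s explicit step (stable)

def temp(H_cell):
    """Temperature from specific enthalpy (datum: solid at 0 degC)."""
    if H_cell < c * Tm:                 # fully solid
        return H_cell / c
    if H_cell > c * Tm + L_lat:         # fully liquid
        return (H_cell - L_lat) / c
    return Tm                           # mushy: isothermal phase change

H = [c * T_init + L_lat] * N            # start fully liquid

def step():
    """One explicit finite-volume update of the enthalpy field."""
    T = [temp(h) for h in H]
    q = [0.0] * (N + 1)                 # face heat fluxes, +x direction
    q[0] = -k * (T[0] - T_wall) / (dx / 2)   # cooled left wall (half cell)
    for i in range(1, N):
        q[i] = -k * (T[i] - T[i - 1]) / dx
    # q[N] stays 0.0: adiabatic right wall
    for i in range(N):
        H[i] += dt / (rho * dx) * (q[i] - q[i + 1])

def solid_fraction():
    liq = [min(max((h - c * Tm) / L_lat, 0.0), 1.0) for h in H]
    return 1.0 - sum(liq) / N

for _ in range(7200):                   # 2 h of cooling
    step()
print(round(solid_fraction(), 2))
```

The enthalpy formulation avoids tracking the front explicitly: the front is simply the region where the cell enthalpy lies on the latent-heat plateau, and the growing solid layer naturally throttles the heat extraction, as described above.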

Keywords: heat transfer enhancement, front solidification, PCM, natural convection

Procedia PDF Downloads 165
191 The Maps of Meaning (MoM) Consciousness Theory

Authors: Scott Andersen

Abstract:

Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action; this is referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism’s consciousness contains a fluid set of nested goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a ‘match’ between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one’s adaptive environment. These efficiencies are objectively arbitrary but determine the operation and level of one’s consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiological mechanism, morphology, and behavior (action) and originates one’s goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is consciousness’s de facto goal-adjudication process. Goal operationalization is fundamentally efficiency-based via one’s unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception is not a ‘frame rate’ but Bayesian priors of efficiency based on one’s extreme thrownness. 
Consciousness, including human consciousness, is modular (i.e., it has a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possible as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness and animal models.

Keywords: consciousness, perception, prospection, embodiment

Procedia PDF Downloads 17
190 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used on the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic and thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storage. In this article, we elaborate models of the equipment, and from these models we extract parameters useful to build load-dependent profiles of the prime movers and storage efficiencies. We show that, under reasonable assumptions, the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat. 
The combination of the PVT panel and electrical storage to support the prioritized loads in an emergency scenario is evaluated on two different days of the year, having the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on the day with very low irradiation levels does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
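The kind of rule-based dispatch described above, including the minimum-load constraint that motivates the dump resistors, can be sketched for a single time step as follows (a toy illustration only; the rated powers, thresholds, and function names are invented, not taken from the paper's model):

```python
# Hypothetical single-step dispatch, powers in kW: use PV first, then the
# battery, then the genset, which must run at or above a minimum load
# fraction -- any resulting surplus goes to the dump resistors as heat.

GENSET_RATED = 25.0     # e.g., sized ~25% above the electrical peak demand
GENSET_MIN_FRAC = 0.4   # minimum acceptable load threshold (assumed)

def dispatch(load, pv, soc, batt_max=5.0):
    """Return (pv_used, batt_power, genset_power, dump_power).
    batt_power > 0 means the battery is discharging."""
    pv_used = min(load, pv)
    residual = load - pv_used
    batt = min(residual, batt_max if soc > 0.2 else 0.0)  # protect battery
    residual -= batt
    if residual <= 0:
        return pv_used, batt, 0.0, 0.0
    # Genset cannot run below its minimum load: round up and dump excess.
    genset = max(residual, GENSET_MIN_FRAC * GENSET_RATED)
    dump = genset - residual        # excess electricity to dump resistors
    return pv_used, batt, genset, dump

# A 12 kW load with 3 kW of PV forces the genset on at its 10 kW minimum,
# sending the surplus to the dump loads.
print(dispatch(12.0, 3.0, 0.5))
```

Repeating this rule over a simulated year, with the dump heat recovered as useful thermal energy, is one simple way to avoid the low-load genset operation the simulations identify.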

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 154