Search results for: high density lipoprotein cholesterol
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22258

1468 Curcumin Nanomedicine: A Breakthrough Approach for Enhanced Lung Cancer Therapy

Authors: Shiva Shakori Poshteh

Abstract:

Lung cancer is a highly prevalent and devastating disease, representing a significant global health concern with profound implications for healthcare systems and society. Its high incidence, mortality rates, and late-stage diagnosis contribute to its formidable nature. To address these challenges, nanoparticle-based drug delivery has emerged as a promising therapeutic strategy. Curcumin (CUR), a natural compound derived from turmeric, has garnered attention as a potential nanomedicine for lung cancer treatment. Nanoparticle formulations of CUR offer several advantages, including improved drug delivery efficiency, enhanced stability, controlled release kinetics, and targeted delivery to lung cancer cells. CUR exhibits a diverse array of effects on cancer cells. It induces apoptosis by upregulating pro-apoptotic proteins, such as Bax and Bak, and downregulating anti-apoptotic proteins, such as Bcl-2. Additionally, CUR inhibits cell proliferation by modulating key signaling pathways involved in cancer progression. It suppresses the PI3K/Akt pathway, crucial for cell survival and growth, and attenuates the mTOR pathway, which regulates protein synthesis and cell proliferation. CUR also interferes with the MAPK pathway, which controls cell proliferation and survival, and modulates the Wnt/β-catenin pathway, which plays a role in cell proliferation and tumor development. Moreover, CUR exhibits potent antioxidant activity, reducing oxidative stress and protecting cells from DNA damage. The use of CUR as a standalone treatment, however, is limited by poor bioavailability, lack of targeting, and susceptibility to degradation. Nanoparticle-based delivery systems can overcome these challenges: they enhance CUR's bioavailability, protect it from degradation, and improve absorption.
Further, nanoparticles enable targeted delivery to lung cancer cells through surface modifications or ligand-based targeting, ensuring sustained release of CUR to prolong therapeutic effects, reduce administration frequency, and facilitate penetration through the tumor microenvironment, thereby enhancing CUR's access to cancer cells. Thus, nanoparticle-based CUR delivery systems promise to improve lung cancer treatment outcomes. This article provides an overview of lung cancer, explores CUR nanoparticles as a treatment approach, discusses the benefits and challenges of nanoparticle-based drug delivery, and highlights prospects for CUR nanoparticles in lung cancer treatment. Future research aims to optimize these delivery systems for improved efficacy and patient prognosis in lung cancer.

Keywords: lung cancer, curcumin, nanomedicine, nanoparticle-based drug delivery

Procedia PDF Downloads 72
1467 An Assessment of Involuntary Migration in India: Understanding Issues and Challenges

Authors: Rajni Singh, Rakesh Mishra, Mukunda Upadhyay

Abstract:

India is among the nations born out of a partition that led to one of the greatest forced migrations of the past century. The Indian subcontinent was partitioned into two nation-states, namely India and Pakistan. This led to an unprecedented mass displacement of about 20 million people in the subcontinent as a whole. This exemplifies the socio-political form of displacement, but there are other identified causes of human displacement, viz., natural calamities, development projects, and human trafficking and smuggling. Forced migrations are rare in incidence and mostly region-specific, so only a small percentage of the population appears to be affected. However, when this percentage is translated into absolute numbers, the real impact of such migration becomes apparent. Forced migration is thus an issue affecting the lives of many people and needs to be addressed with proper intervention. Forced or involuntary migration decimates people's assets, strips them of their most basic resources, and compels them to migrate without planning or intention, which in most cases proves to be a burden on the resources of the destination. Thus, questions of security arise concerning the protection and safeguarding of these migrants, who need help at the place of destination. This brings the human security dimension of forced migration into the picture. The present study is an analysis of a sample of 1,501 persons surveyed by the National Sample Survey Organisation (NSSO) in India, which identifies three causes of forced migration: natural disaster, social/political problems, and displacement by development projects. It was observed that, of the total forced migrants, about four-fifths were internally displaced persons.
However, there was also a considerable inflow of such migrants from across the borders, the major contributing countries being Bangladesh, Pakistan, Sri Lanka, the Gulf countries, and Nepal. Among the three causes of involuntary migration, social and political problems are the most prominent in displacing large masses of people; this is also the cause for which the share of international migrants relative to internally displaced persons is highest among the three factors. After social and political problems, natural calamities displaced the largest share of involuntary migrants. The present paper examines the factors that increase people's vulnerability to forced migration. An examination of the migrants' background characteristics showed that those who were economically weak and socially fragile were more susceptible to forced migration. Insight into this fragile group of society is therefore required so that government policies can benefit them in the most efficient and targeted manner.

Keywords: involuntary migration, displacement, natural disaster, social and political problem

Procedia PDF Downloads 354
1466 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database, two of which use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with gas-phase compositions of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with the use of artificial neural networks (ANN), was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the root mean squared error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid bias.
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
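The calibration-and-evaluation loop described above can be illustrated in miniature. The sketch below fits a hypothetical log-linear geothermometer by ordinary least squares and scores it with the R and RMSE criteria named in the abstract; the data, functional form, and coefficients are synthetic assumptions, not the study's ANN or database.

```python
import numpy as np

# Hypothetical calibration of a gas geothermometer of the assumed form
# BHT = a + b * ln(CO2/H2), scored with R and RMSE.  Synthetic data only.
rng = np.random.default_rng(0)
ln_ratio = rng.uniform(-2.0, 4.0, 50)                              # ln(CO2/H2)
bht_measured = 300.0 - 25.0 * ln_ratio + rng.normal(0.0, 5.0, 50)  # BHTM, deg C

# Ordinary least squares for the coefficients (a, b)
A = np.column_stack([np.ones_like(ln_ratio), ln_ratio])
coeffs, *_ = np.linalg.lstsq(A, bht_measured, rcond=None)
bht_predicted = A @ coeffs

# Prediction-performance metrics: linear correlation R and RMSE
r = np.corrcoef(bht_measured, bht_predicted)[0, 1]
rmse = np.sqrt(np.mean((bht_measured - bht_predicted) ** 2))
print(f"a = {coeffs[0]:.1f}, b = {coeffs[1]:.1f}, R = {r:.3f}, RMSE = {rmse:.1f}")
```

In the study itself the regression is replaced by an ANN trained with the Levenberg-Marquardt algorithm, but the external-validation metrics (R, RMSE) are computed in the same spirit.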

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 352
1465 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one arena and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22). Qatar hosted the FIFA Arab Cup 2021 (FAC21) to serve as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance the public experience at the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21 (T1: 01 Oct 2021 to 29 Nov 2021), during FAC21 (T2: 30 Nov 2021 to 18 Dec 2021), and after FAC21 (T3: 19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed; (2) emojis, punctuation, and stop words were removed; (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Huggingface was applied to identify the sentiment of the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in each period.
However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21. Therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
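The preprocessing steps described above can be sketched in a few lines. The exact filtering rules are not given in the abstract, so the regular expressions below are illustrative assumptions; stop-word removal and lemmatization (e.g., via NLTK) are omitted for brevity, and sentiment would subsequently be assigned with the Hugging Face XLM-RoBERTa sentiment model, which is not run here.

```python
import re

def preprocess(tweets):
    """Sketch of the cleaning pipeline: drop retweets and duplicates,
    strip URLs, mentions, emoji, and punctuation, then lowercase."""
    seen, cleaned = set(), []
    for t in tweets:
        if t.startswith("RT @"):                   # (1) remove retweets
            continue
        t = re.sub(r"http\S+|@\w+|#", " ", t)      # URLs, mentions, bare '#'
        t = re.sub(r"[^\w\s]", " ", t)             # (2) punctuation and emoji
        t = re.sub(r"\s+", " ", t).strip().lower()
        if t and t not in seen:                    # (1) remove duplicates
            seen.add(t)
            cleaned.append(t)
    return cleaned
```

Each surviving tweet would then be scored as positive, neutral, or negative by the pretrained multilingual sentiment classifier.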

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 100
1464 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems that we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance on determining entailment between the sentences translated into the classifier's native language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for optimizing the translator's policy and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
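The communication-game training signal can be illustrated with a toy policy-gradient loop. This is a drastically simplified sketch, not the authors' implementation: the 'translator' is reduced to a softmax policy over three hypothetical candidate outputs, and the frozen 'classifier' reward is a stub that stands in for the pretrained entailment model.

```python
import math
import random

# Toy REINFORCE loop: the policy learns to pick the candidate translation
# that the (frozen, stubbed) classifier can still work with.
random.seed(0)

candidates = ["a cat sits", "a dog runs", "cat sit a"]  # hypothetical outputs

def classifier_reward(translation):
    # Stand-in for the pretrained NLI classifier: it rewards the one
    # candidate it "understands".  A real system would score entailment.
    return 1.0 if translation == "a cat sits" else 0.0

logits = [0.0, 0.0, 0.0]
lr = 0.5
for step in range(200):
    # Sample an action from the softmax policy
    z = [math.exp(l) for l in logits]
    total = sum(z)
    probs = [p / total for p in z]
    r, acc, i = random.random(), 0.0, 0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            break
    reward = classifier_reward(candidates[i])
    # Policy-gradient update: grad of log pi is (one_hot(i) - probs)
    for j in range(len(logits)):
        logits[j] += lr * reward * ((1.0 if j == i else 0.0) - probs[j])

best = candidates[max(range(len(logits)), key=lambda j: logits[j])]
```

After training, `best` is the candidate the classifier rewards, mirroring how the translator in the paper is driven toward translations that preserve the information the entailment classifier needs.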

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 128
1463 New Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Polya formed the first significant systematic account of the subject; their work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases. Concepts of time-scale calculus will be used, which allow many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, Fubini's theorem, and Hölder's inequality.
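For reference, the classical one-dimensional Hardy inequality that the higher-dimensional time-scale versions build on reads, in its standard form:

```latex
% Classical Hardy inequality: for p > 1 and f >= 0 measurable on (0, \infty),
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f^{p}(x)\, dx .
```

The constant (p/(p-1))^p is sharp; time-scale analogues replace the Lebesgue integrals with delta-integrals over an arbitrary nonempty closed subset of the reals, recovering the integral and discrete (series) forms as special cases.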

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 95
1462 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves

Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann

Abstract:

Lithium is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th century, and is often termed 'white gold'. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (around 70%) have focused on synthetic rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a case study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity.
Furthermore, the pristine Mn-based adsorbent was modified by doping with transition metals, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneity and endothermic nature of the adsorption process.
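The two models named above, the Langmuir isotherm and pseudo-second-order kinetics, can be illustrated with a short fitting sketch. The data points below are synthetic, generated from assumed parameter values, and only stand in for the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm (monolayer adsorption): q = q_max*K_L*Ce/(1+K_L*Ce)."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

# --- Isotherm fit (synthetic equilibrium data, assumed parameters) ---
ce = np.linspace(1.0, 200.0, 25)      # equilibrium Li+ concentration, mg/L
q_obs = langmuir(ce, 32.0, 0.05)      # "observed" adsorption capacity, mg/g
(q_max_fit, k_l_fit), _ = curve_fit(langmuir, ce, q_obs, p0=(20.0, 0.01))

# --- Pseudo-second-order kinetics ---
# Linearized form: t/q_t = 1/(k2*qe^2) + t/qe, so t/q_t is linear in t.
t = np.linspace(1.0, 120.0, 12)       # contact time, min
qe, k2 = 30.0, 0.004                  # assumed equilibrium capacity and rate
q_t = qe**2 * k2 * t / (1.0 + qe * k2 * t)
slope, intercept = np.polyfit(t, t / q_t, 1)
qe_fit = 1.0 / slope                  # recovered equilibrium capacity, mg/g
```

In practice the same fits would be run on measured q vs. Ce and q vs. t data, and the fitted q_max, K_L, k2, and qe compared across pristine and doped adsorbents.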

Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves

Procedia PDF Downloads 46
1461 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory

Authors: Roya Yumul, Chse, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra

Abstract:

Background: Preoperative anxiety has been shown to be caused by fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the extensive 40-item questionnaire. Our primary aim in this study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) with the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale for use in the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria comprised patients undergoing elective surgeries, while exclusion criteria comprised patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, a history of substance abuse, and those pregnant or lactating. Data from 293 patients, covering demographics, anxiety scale survey results, and anesthesia records, were analyzed using Spearman coefficients, chi-squared analysis, and Fisher's exact test for comparison. Results: Statistical analysis showed that VFAS had a higher correlation with STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and the Categorization Scale (14.0%).
Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), and lack of information about the surgery (40.1%) or anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation with the gold standard, STAI. Both the VFAS and Categorization tests also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation with the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited.
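The core comparison, Spearman rank correlation of each brief scale against the STAI gold standard, can be reproduced in outline as follows; the scores below are synthetic stand-ins generated under assumed noise levels, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic illustration: 293 "patients" with STAI totals and two quick
# 0-10 scales that track STAI with different amounts of noise.
rng = np.random.default_rng(1)
n = 293
stai = rng.integers(20, 81, n)                             # STAI total, 20-80
vfas = np.clip(stai / 8 + rng.normal(0, 1.2, n), 0, 10)    # faces scale, 0-10
nvrs = np.clip(stai / 8 + rng.normal(0, 1.5, n), 0, 10)    # numeric scale, 0-10

# Rank correlation of each quick scale against the gold standard
rs_vfas, p_vfas = spearmanr(stai, vfas)
rs_nvrs, p_nvrs = spearmanr(stai, nvrs)
print(f"VFAS: rs={rs_vfas:.2f}, NVRS: rs={rs_nvrs:.2f}")
```

Spearman (rather than Pearson) correlation is the natural choice here because the scales are ordinal and need not relate linearly to STAI totals.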

Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale

Procedia PDF Downloads 141
1460 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment

Authors: Nabil Al-Zaqri

Abstract:

Semiconductor photocatalysts with surface defects display a remarkable light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect, leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with a larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by prolonging photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production, showing a quantum efficiency of 1.125%, and efficient organic degradation. Our promising findings demonstrated that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollution degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and smaller particle size were observed.
The ultrasound treatment causes particle aggregation and surface defects, leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where ¹O₂ was found to be the major active species. Hence, ultrasound treatment yields efficient photocatalysts for superior hydrogen evolution and excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures.

Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production

Procedia PDF Downloads 141
1459 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress

Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor

Abstract:

Glaucoma is the global leading cause of irreversible blindness. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus, it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were divided equally: groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate-buffered saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, rats were subjected to visual behavior assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and determination of nitrosative stress levels using a 3-nitrotyrosine ELISA. Visual behavior assessments via open field, object, and color recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behavior. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased this thickness 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei per 100 μm² within the inner retina 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced degenerative changes compared to the glaucoma group, which exhibited vacuolation over all sections.
PhTX-343 also decreased retinal 3-nitrotyrosine concentration 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to controls (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.

Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour

Procedia PDF Downloads 137
1458 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. 
This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
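The fleet-scheduling decision the abstract describes is, at its core, a mixed-integer optimization: when to charge and discharge the buses' batteries against market prices. A toy brute-force sketch of that decision is below; the prices, battery size, and one-unit-per-hour model are all invented for illustration, and the paper's actual MILP formulation (solved on an HPC cluster) is far richer.

```python
from itertools import product

def best_schedule(prices, capacity):
    """Exhaustively search charge/discharge decisions (+1 buy, -1 sell,
    0 idle; one unit per hour) that maximize arbitrage revenue while
    keeping the battery state of charge within [0, capacity]."""
    best = None
    for plan in product((-1, 0, 1), repeat=len(prices)):
        soc, revenue, feasible = 0, 0.0, True
        for d, p in zip(plan, prices):
            soc += d
            if not 0 <= soc <= capacity:
                feasible = False
                break
            revenue -= d * p  # buying (+1) costs money, selling (-1) earns it
        if feasible and (best is None or revenue > best[0]):
            best = (revenue, plan)
    return best

# Hypothetical hourly prices (EUR/MWh) and a 2-unit battery
revenue, plan = best_schedule([20, 10, 30, 15, 40, 25], capacity=2)
print(revenue, plan)
```

A real MILP solver replaces the exhaustive search, but the objective (revenue) and the state-of-charge constraint are the same ingredients.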

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 74
1457 Molecular Detection of mRNA bcr-abl and Circulating Leukemic Stem Cells CD34+ in Patients with Acute Lymphoblastic Leukemia and Chronic Myeloid Leukemia and Its Association with Clinical Parameters

Authors: B. Gonzalez-Yebra, H. Barajas, P. Palomares, M. Hernandez, O. Torres, M. Ayala, A. L. González, G. Vazquez-Ortiz, M. L. Guzman

Abstract:

Leukemia arises by molecular alterations of the normal hematopoietic stem cell (HSC), transforming it into a leukemic stem cell (LSC) with high cell proliferation, self-renewal, and cell differentiation. Chronic myeloid leukemia (CML) originates from an LSC, leading to elevated proliferation of myeloid cells, and acute lymphoblastic leukemia (ALL) originates from an LSC, leading to elevated proliferation of lymphoid cells. In both cases, LSC can be identified by multicolor flow cytometry using several antibodies. However, to date, LSC levels in peripheral blood (PB) are not well established in ALL and CML patients. On the other hand, the detection of minimal residual disease (MRD) in leukemia is mainly based on the identification of the mRNA bcr-abl gene in CML patients and some other genes in ALL patients. There is no proper biomarker to detect MRD in both types of leukemia. The objective of this study was to determine mRNA bcr-abl and the percentage of LSC in the peripheral blood of patients with CML and ALL and to identify a possible association between the amount of LSC in PB and clinical data. We included 19 patients with leukemia in this study. A PB sample was collected per patient, and leukocytes were obtained by Ficoll gradient. The immunophenotype for LSC CD34+ was determined by flow cytometry analysis with CD33, CD2, CD14, CD16, CD64, HLA-DR, CD13, CD15, CD19, CD10, CD20, CD34, CD38, CD71, CD90, CD117, CD123 monoclonal antibodies. In addition, to identify the presence of the mRNA bcr-abl by RT-PCR, the RNA was isolated using TRIZOL reagent. Molecular (presence of mRNA bcr-abl and LSC CD34+) and clinical results were analyzed with descriptive statistics, and a multiple regression analysis was performed to determine statistically significant associations. In total, 19 patients (8 patients with ALL and 11 patients with CML) were analyzed: 9 patients with de novo leukemia (ALL = 6 and CML = 3) and 10 under treatment (ALL = 5 and CML = 5).
The overall frequency of mRNA bcr-abl was 31% (6/19); it was negative in ALL patients and positive in 80% of CML patients. On the other hand, LSC were detected in 16/19 leukemia patients (%LSC = 0.02-17.3). De novo patients had a higher percentage of LSC (0.26 to 17.3%) than patients under treatment (0 to 5.93%). The variables significantly associated with the amount of LSC were the absence of treatment, the absence of splenomegaly, and a lower number of leukocytes; negative associations were found for the clinical variables age, sex, blasts, and mRNA bcr-abl. In conclusion, patients with de novo leukemia had a higher percentage of circulating LSC than patients under treatment, and this was associated with clinical parameters such as lack of treatment, absence of splenomegaly, and a lower number of leukocytes. The mRNA bcr-abl detection was only possible in the series of patients with CML, whereas circulating LSC could be identified in the peripheral blood of all leukemia patients; we believe the identification of circulating LSC may be used as a biomarker for the detection of MRD in leukemia patients.
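The regression analysis mentioned above boils down to relating %LSC to clinical covariates. A minimal sketch with a single binary predictor (treatment status) and closed-form ordinary least squares is shown below; the %LSC numbers are invented for illustration (only their direction echoes the reported finding that patients under treatment have lower %LSC), and the study's actual multiple regression used several covariates.

```python
def ols(x, y):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical data: treatment status (0 = de novo, 1 = under treatment)
# against %LSC; with a binary predictor, the intercept is the de novo group
# mean and the slope is the between-group difference.
treated = [0, 0, 0, 0, 1, 1, 1, 1]
pct_lsc = [9.0, 12.5, 6.1, 17.3, 1.2, 0.4, 5.9, 2.0]
a, b = ols(treated, pct_lsc)
print(a, b)
```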

Keywords: stem cells, leukemia, biomarkers, flow cytometry

Procedia PDF Downloads 357
1456 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what is known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you are going to require one of the strongest, fastest, most capable supercomputers of the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, each petaFLOPS is the ability to perform one quadrillion (10¹⁵) floating-point operations per second. A number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the entire actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 107
1455 Assessment of N₂ Fixation and Water-Use Efficiency in a Soybean-Sorghum Rotation System

Authors: Mmatladi D. Mnguni, Mustapha Mohammed, George Y. Mahama, Alhassan L. Abdulai, Felix D. Dakora

Abstract:

Industrial nitrogen (N) fertilizers are justifiably credited for the current state of food production across the globe, but their continued use is not sustainable and has an adverse effect on the environment. The search for greener and sustainable technologies has led to an increase in exploiting biological systems such as legumes and organic amendments for plant growth promotion in cropping systems. Although the benefits of legume rotation with cereal crops have been documented, the full benefits of soybean-sorghum rotation systems have not been properly evaluated in Africa. This study explored the benefits of soybean-sorghum rotation by assessing N₂ fixation and water-use efficiency of soybean in rotation with sorghum with and without organic and inorganic amendments. The field trials were conducted from 2017 to 2020. Sorghum was grown on plots previously cropped to soybean and vice versa. The succeeding sorghum crop received fertilizer amendments [organic fertilizer (5 tons/ha as poultry litter, OF); inorganic fertilizer (80N-60P-60K), IF; organic + inorganic fertilizer (OF+IF); half organic + inorganic fertilizer (HOF+IF); organic + half inorganic fertilizer (OF+HIF); half organic + half inorganic (HOF+HIF); and control] and was arranged in a randomized complete block design. The soybean crop succeeding fertilized sorghum received a blanket application of triple superphosphate at 26 kg P ha⁻¹. Nitrogen fixation and water-use efficiency were respectively assessed at the flowering stage using the ¹⁵N and ¹³C natural abundance techniques. The results showed that the shoot dry matter of soybean plants supplied with HOF+HIF was much higher (43.20 g plant⁻¹), followed by OF+HIF (36.45 g plant⁻¹) and HOF+IF (33.50 g plant⁻¹). Shoot N concentration ranged from 1.60 to 1.66%, and total N content from 339 to 691 mg N plant⁻¹.
The δ¹⁵N values of soybean shoots ranged from -1.17‰ to -0.64‰, with plants growing on plots previously treated with HOF+HIF exhibiting much higher δ¹⁵N values, and hence a lower percentage of N derived from N₂ fixation (%Ndfa). Shoot %Ndfa values varied from 70 to 82%. The high %Ndfa values obtained in this study suggest that the previous year's organic and inorganic fertilizer amendments to sorghum did not inhibit N₂ fixation in the following soybean crop. The amount of N-fixed by soybean ranged from 106 to 197 kg N ha⁻¹. The treatments showed marked variations in carbon (C) content, with the HOF+HIF treatment recording the highest C content. Although shoot δ¹³C (the proxy for water-use efficiency) varied from -29.32‰ to -27.85‰, shoot water-use efficiency, C concentration, and C:N ratio were not altered by previous fertilizer application to sorghum. This study provides strong evidence that previous HOF+HIF sorghum residues can enhance N nutrition and water-use efficiency in nodulated soybean.
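The ¹⁵N natural abundance method behind the %Ndfa figures uses a standard formula, %Ndfa = 100 × (δ¹⁵N_ref − δ¹⁵N_legume)/(δ¹⁵N_ref − B), where the reference δ¹⁵N comes from a non-fixing plant and B is the δ¹⁵N of the legume grown on N-free medium. A sketch is below; the reference value (+3.5‰), B value (−1.95‰) and shoot N total are assumptions chosen only to land inside the abstract's reported ranges, not values from the study.

```python
def pct_ndfa(d15n_ref, d15n_legume, b_value):
    """Percent N derived from fixation via the 15N natural abundance method:
    %Ndfa = 100 * (d15N_ref - d15N_legume) / (d15N_ref - B)."""
    return 100.0 * (d15n_ref - d15n_legume) / (d15n_ref - b_value)

def n_fixed_kg_ha(pct, shoot_n_kg_ha):
    """Amount of N fixed, given shoot N accumulation per hectare."""
    return pct / 100.0 * shoot_n_kg_ha

# Assumed inputs: reference-plant delta15N = +3.5 permil, soybean shoot
# delta15N = -0.9 permil (within the reported -1.17 to -0.64 range), and a
# B value of -1.95 permil (values around -1.5 to -2.0 permil have been
# reported for soybean shoots; this choice is illustrative).
pct = pct_ndfa(3.5, -0.9, -1.95)
n_fixed = n_fixed_kg_ha(pct, 200.0)  # assumed shoot N of 200 kg N/ha
print(round(pct, 1), round(n_fixed, 1))
```

With these inputs the sketch returns a %Ndfa of about 81% and an N-fixed value inside the abstract's 106-197 kg N ha⁻¹ range.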

Keywords: ¹³C and ¹⁵N natural abundance, N-fixed, organic and inorganic fertilizer amendments, shoot %Ndfa

Procedia PDF Downloads 170
1454 Climate Change, Women's Labour Markets and Domestic Work in Mexico

Authors: Luis Enrique Escalante Ochoa

Abstract:

This paper attempts to assess the impacts of climate change (CC) on inequalities in the labour market. CC will have the most serious effects on some vulnerable economic sectors, such as agriculture, livestock or tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market, and particularly on Mexican women. Influential documents such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014 revived a global effort to counteract the effects of CC, called for an analysis of the impacts on vulnerable socio-economic groups and on economic activities, and for the development of decision-making tools to enable policy and other decisions based on the complexity of the world in relation to climate change, taking into account socio-economic attributes. We follow up on this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes (gender and level of qualification of workers). Most studies have focused on the effects of CC on the agricultural sector, as it is considered an economic sector highly vulnerable to the effects of climate variability. This research seeks to contribute to the existing literature by taking into account, in addition to the agricultural sector, other sectors such as tourism, water availability, and energy that are of vital importance to the Mexican economy. Likewise, the effects of climate change will be extended to the labour market and specifically to women, who in some cases have been left out. Some studies are sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which are too often omitted from analyses. This work will contribute to the literature by integrating domestic work, which in the case of Mexico is much higher among women than among men (80.9% vs.
19.1%), according to the 2009 time use survey. This study is relevant since it allows us to analyse the impacts of climate change not only on the labour market of the formal economy, but also in the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy, as it is a country with a high degree of gender inequality in the labour market. The OECD economic study for Mexico (2017) highlights the low labour participation of Mexican women. Although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared to the OECD average, where women's labour market participation is around 70%. According to Mexico's 2009 time use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them, is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.

Keywords: climate change, labour market, domestic work, rural sector

Procedia PDF Downloads 131
1453 Use of Cassava Waste and Its Energy Potential

Authors: I. Inuaeyen, L. Phil, O. Eni

Abstract:

Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for driving climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, bio-heat, etc., using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is generally a staple food in Nigeria and one of the most widely cultivated crops among farmers across Nigeria. As a result, there is an abundant supply of cassava waste in Nigeria. In this study, the aim is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, furfural, etc., using a combination of biochemical, thermochemical and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using Aspen Plus process simulation software.
Process economic analysis will be done using Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key literature with useful parameters for developing the different cassava waste conversion processes has been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario using process economics, life cycle analysis, energy efficiency and social impact as the performance indexes.

Keywords: bio-refinery, cassava waste, energy, process modelling

Procedia PDF Downloads 374
1452 Different Response of Pure Arctic Char Salvelinus alpinus and Hybrid (Salvelinus alpinus × Salvelinus fontinalis Mitchill) to Various Hyperoxic Regimes

Authors: V. Stejskal, K. Lundova, R. Sebesta, T. Vanina, S. Roje

Abstract:

The pure strain of Arctic char (AC) Salvelinus alpinus and the hybrid (HB) Salvelinus alpinus × Salvelinus fontinalis Mitchill are fish with great potential for culture in recirculating aquaculture systems (RAS). Aquaculture of these fish currently uses flow-through systems (FTS), especially in Nordic countries such as Iceland (the biggest producer), Norway, Sweden, and Canada. Four different water saturation regimes, including normoxia (NOR), permanent hyperoxia (HYP), intermittent hyperoxia (HYP±) and a regime where one day of normoxia was followed by one day of hyperoxia (HYP1/1), were tested during a 63-day experiment in both species in two parallel experiments. Fish were reared in two identical RAS systems consisting of 24 round plastic tanks (300 L each), a drum filter, a biological filter with moving beads and a submerged biofilter. The temperature was maintained at 13.6 ± 0.8 °C using a flow-through cooler. The different water saturation regimes were achieved by mixing pure oxygen (O₂) with water in three mixing towers (one for each hyperoxic regime) equipped with flowmeters for regulation of the gas inflow. The water in the HYP, HYP1/1 and HYP± groups was enriched with oxygen up to a saturation of 120-130%. In the HYP group, this level was kept throughout the whole day. In the HYP± group, hyperoxia was applied during the daylight phase (08:00-20:00) only, with normoxia during the night. The oxygen saturation of 80-90% in the NOR group was created using intensive aeration in the header tank. The fish were fed a commercial feed to slight excess at 2 h intervals within the light phase of the day. Water quality parameters such as pH, temperature and oxygen level were monitored three times per day (7 am, 10 am and 6 pm) using a handheld multimeter. Ammonium, nitrite and nitrate were measured at two-day intervals using spectrophotometry. Initial body weight (BW) was 40.9 ± 8.7 g and 70.6 ± 14.8 g in the AC and HB groups, respectively.
Final survival of AC ranged from 96.3 ± 4.6% (HYP) to 100 ± 0.0% in all other groups, without significant differences among these groups. Similarly, very high survival was reached in the trial with HB, with levels from 99.2 ± 1.3% (HYP, HYP1/1 and NOR) to 100 ± 0.0% (HYP±). HB fish showed the best growth performance in the NOR group, reaching a final body weight (BW) of 180.4 ± 2.3 g. Fish growth under the different hyperoxic regimes was significantly reduced, with final BW of 164.4 ± 7.6, 162.1 ± 12.2 and 151.7 ± 6.8 g in the HYP1/1, HYP± and HYP groups, respectively. AC showed a different preference for hyperoxic regimes, as there was no significant difference in BW among the NOR, HYP1/1 and HYP± groups, with final values of 72.3 ± 11.3, 68.3 ± 8.4 and 77.1 ± 6.1 g. Significantly reduced growth (BW 61.8 ± 6.8 g) was observed in the HYP group. It is evident from the present study that there are differences between purebred Arctic char and the hybrid in relation to hyperoxic regimes. The study was supported by projects 'CENAKVA' (No. CZ.1.05/2.1.00/01.0024), 'CENAKVA II' (No. LO1205 under the NPU I program), NAZV (QJ1510077) and GAJU (No. 060/2016/Z).
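A standard way to compare growth figures like these across trials of different length is the specific growth rate, SGR = 100 × (ln W_final − ln W_initial)/days. The abstract does not report SGR itself; the sketch below merely computes this common aquaculture metric from the reported HB weights in the NOR group as an illustration.

```python
import math

def specific_growth_rate(w_initial, w_final, days):
    """Specific growth rate (% body weight per day):
    SGR = 100 * (ln(Wf) - ln(Wi)) / t."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

# Hybrid group, NOR regime: 70.6 g to 180.4 g over the 63-day trial
sgr = specific_growth_rate(70.6, 180.4, 63)
print(round(sgr, 2))  # about 1.49 % body weight per day
```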

Keywords: recirculating aquaculture systems, Salmonidae, hyperoxia, abiotic factors

Procedia PDF Downloads 182
1451 The Effect of Metal-Organic Framework Pore Size to Hydrogen Generation of Ammonia Borane via Nanoconfinement

Authors: Jing-Yang Chung, Chi-Wei Liao, Jing Li, Bor Kae Chang, Cheng-Yu Wang

Abstract:

The chemical hydride ammonia borane (AB, NH₃BH₃) draws attention in hydrogen energy research for its high theoretical gravimetric capacity (19.6 wt%). Nevertheless, elevated AB decomposition temperatures (Td) and unwanted byproducts are the main hurdles to practical application. It was reported that the byproducts and Td can be reduced with the nanoconfinement technique, in which AB molecules are confined in porous materials, such as porous carbon, zeolite, metal-organic frameworks (MOFs), etc. Although nanoconfinement empirically shows effectiveness in reducing the hydrogen generation temperature of AB, the theoretical mechanism is debatable. Low Td was reported in AB@IRMOF-1 (Zn₄O(BDC)₃, BDC = benzenedicarboxylate), where Zn atoms form a closed metal-cluster secondary building unit (SBU) with no exposed active sites. Besides nanosizing the hydride, it was also observed that catalyst addition facilitates AB decomposition, as in composites of Li-catalyzed carbon CMK-3, the MOF JUC-32-Y with exposed Y³⁺, etc. It is believed that nanosized AB is critical for lowering Td, while active sites eliminate byproducts. Nonetheless, some researchers claimed that it is the catalytic sites, not the hydride size, that are the critical factor in reducing Td. That group physically ground AB with ZIF-8 (zeolitic imidazolate framework, Zn(2-methylimidazolate)₂) and found a similar reduced-Td phenomenon, even though the AB molecules were not 'confined' or formed into nanoparticles by physical hand grinding. This suggests that the catalytic reaction, not nanoconfinement, leads to the promotion of AB dehydrogenation. In this research, we explored the possible criteria governing the hydrogen production temperature of AB nanoconfined in MOFs with different pore sizes and active sites. MOFs with metal SBUs such as Zn (IRMOF), Zr (UiO), and Al (MIL-53), accompanied by various organic ligands (BDC and BPDC; BPDC = biphenyldicarboxylate), were modified with AB.
Excess MOF was used so that the AB size was constrained in micropores, estimated by revisiting the Horvath-Kawazoe model. AB dissolved in methanol was added to the MOF crystals at a MOF pore volume to AB ratio of 4:1, and the slurry was dried under vacuum to collect AB@MOF powders. With TPD-MS (temperature programmed desorption with mass spectroscopy), we observed that Td was reduced with smaller MOF pores. For example, it was reduced from 100 °C to 64 °C for micropores of ~1 nm, compared with ~90 °C for pore sizes up to 5 nm. The behavior of Td as a function of AB crystallite radius obeys thermodynamics when the Gibbs free energy of AB decomposition is zero, and no obvious correlation with metal type was observed. In conclusion, we discovered that the Td of AB scales with the reciprocal of the MOF pore size, an effect possibly stronger than that of active sites.
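The claimed 1/pore-size dependence can be made concrete by fitting a Gibbs-Thomson-like depression, Td(d) = a − b/d, through the two data points quoted in the abstract. This is only an illustrative two-point fit, not the paper's analysis; reassuringly, the extrapolated bulk-like intercept lands near the quoted 100 °C bulk value.

```python
def fit_td_vs_inverse_pore(points):
    """Fit Td = a - b/d through two (pore size d [nm], Td [deg C]) points:
    a depression linear in the reciprocal pore size."""
    (d1, t1), (d2, t2) = points
    b = (t2 - t1) / (1.0 / d1 - 1.0 / d2)
    a = t1 + b / d1
    return a, b

# Values read from the abstract: Td ~64 C at ~1 nm pores, ~90 C at 5 nm pores
a, b = fit_td_vs_inverse_pore([(1.0, 64.0), (5.0, 90.0)])
print(a, b)  # intercept a ~96.5 C, close to the reported 100 C bulk Td
```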

Keywords: ammonia borane, chemical hydride, metal-organic framework, nanoconfinement

Procedia PDF Downloads 187
1450 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States

Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh

Abstract:

Insulin and glucagon are responsible for the homeostasis of key plasma metabolites like glucose, amino acids and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response from the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, during post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than the physiological levels, while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver.
The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value.
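The mechanism invoked here, sigmoidal positive feedback producing two stable steady states, can be illustrated with a generic one-variable toy model (this is not the authors' integrated network; the equation and all parameters are invented for illustration). Counting the sign changes of dx/dt reveals three fixed points (two stable, one unstable) at low stimulus and a single fixed point once the stimulus is large.

```python
def dxdt(x, s, v=1.0, K=1.0, d=0.5):
    """Rate of change of a lumped 'activation' variable x:
    basal stimulus s, sigmoidal positive feedback, linear decay."""
    return s + v * x**2 / (K**2 + x**2) - d * x

def count_fixed_points(s, lo=0.0, hi=5.0, n=5000):
    """Count sign changes of dx/dt on a fine grid; each bracket contains one
    steady state of the toy model."""
    xs = [lo + i * (hi - lo) / n for i in range(n + 1)]
    return sum(1 for i in range(n)
               if dxdt(xs[i], s) * dxdt(xs[i + 1], s) < 0)

n_low = count_fixed_points(0.02)   # low stimulus: three steady states (bistable)
n_high = count_fixed_points(0.25)  # high stimulus: one steady state (monostable)
print(n_low, n_high)
```

The same qualitative transition, bistable at one input level and monostable at another, is what the phosphorylation metric exhibits as glucose, fatty acid, or amino acid levels change.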

Keywords: bistability, diabetes, feedback and crosstalk, obesity

Procedia PDF Downloads 275
1449 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria

Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale

Abstract:

This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data used for the study were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis (DEA) and a Tobit regression model were used to analyze the data. The DEA classification of the farmers into efficient and inefficient showed that 17.67% of the sampled tuber crop farmers in the study area were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve on their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative society membership had an index value of 0.21. The results also showed that allocative efficiency increased to 0.43 for farmers with higher formal education and decreased to 0.16 for farmers with non-formal education.
The efficiency level in the allocation of resources increased with more contact with extension services, as the allocative efficiency index increased from 0.16 to 0.31 with the frequency of extension contact increasing from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this reduces to an average of 15% as the farmer grows older. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place for farmers who did not attain the optimum frontier level, so that they can learn how to attain the remaining level of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way to increase the efficiency level of the farmers in the study area.
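The DEA idea of scoring each farm against the best performers in the sample reduces, in the simplest single-input single-output case, to scaling each farm's output/input ratio by the best ratio observed, so frontier farms score exactly 1.0. The sketch below uses invented farm data; the study's actual DEA handles multiple inputs and outputs via linear programming.

```python
def dea_ccr_single(inputs, outputs):
    """Single-input single-output CCR-style efficiency: each unit's
    output/input ratio divided by the best ratio in the sample, so
    frontier units score 1.0 and the rest score below 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical farms: input = cost of resources used, output = tuber yield value
scores = dea_ccr_single(inputs=[10.0, 20.0, 15.0], outputs=[8.0, 10.0, 12.0])
print([round(s, 3) for s in scores])  # farms 1 and 3 sit on the frontier
```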

Keywords: allocative efficiency, DEA, Tobit regression, tuber crop

Procedia PDF Downloads 289
1448 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets to reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. The variation of temperature, volume fraction of brine and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of temperature variation is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller near the colder side than near the warmer side. At the start of the solution, the numerical scheme tends toward instability because of the sharp temperature variation at the start of the process; refining the intervals resolves this. The analytical model, solved with this numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
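As a minimal illustration of the Method of Lines setup described above, the sketch below solves 1D transient heat conduction in a slab with one thermally insulated side and one side suddenly held at a constant cold temperature. All property values are hypothetical constants; the study's full model additionally couples latent heat and brine-pocket salinity, which is omitted here.

```python
# Illustrative Method of Lines sketch: 1D heat conduction, one insulated
# boundary, one sudden constant-temperature boundary. Constant (hypothetical)
# diffusivity; latent-heat and salinity coupling from the paper are omitted.

def solve_heat_mol(n=21, length=0.1, alpha=1.2e-6, t_init=-2.0,
                   t_cold=-20.0, dt=0.05, t_end=60.0):
    """Discretize the slab into n nodes; node 0 is held at t_cold, and
    node n-1 is thermally insulated (zero gradient)."""
    dx = length / (n - 1)
    temp = [t_init] * n
    temp[0] = t_cold  # sudden constant-temperature boundary
    for _ in range(int(t_end / dt)):
        new = temp[:]
        for i in range(1, n - 1):
            # explicit time step of the semi-discrete ODE dT/dt = alpha*T_xx
            new[i] = temp[i] + alpha * dt / dx**2 * (
                temp[i + 1] - 2 * temp[i] + temp[i - 1])
        # insulated boundary via a mirror node: T[n] = T[n-2]
        new[-1] = temp[-1] + alpha * dt / dx**2 * 2 * (temp[-2] - temp[-1])
        temp = new
    return temp

profile = solve_heat_mol()
```

The explicit step is stable here because alpha*dt/dx² is far below 0.5; the sharp initial gradient at the cold boundary is exactly the source of the early-time instability the abstract mentions when intervals are too coarse.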

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 217
1447 Kawasaki Disease in a Two-Month-Old Kuwaiti Girl: A Case Report and Literature Review

Authors: Hanan Bin Nakhi, Asaad M. Albadrawi, Maged Al Shahat, Entesar Mandani

Abstract:

Background: Kawasaki disease (KD) is one of the most common vasculitides of childhood and is considered the leading cause of acquired heart disease in children. The peak age of occurrence is 6 to 24 months, with 80% of affected children being less than 5 years old. There are only a few reports of KD in infants younger than 6 months. Infants have a higher incidence of atypical KD and of coronary artery complications. This case report from Kuwait reinforces the need to consider atypical KD in sepsis-like presentations with negative cultures that do not respond to systemic antibiotics. Early diagnosis allows early treatment with intravenous immune globulin (IVIG) and thereby decreases the incidence of cardiac aneurysm. Case Report: A 2-month-old female infant, the product of a full-term normal delivery to consanguineous parents, presented with fever and poor feeding. She was admitted and treated for a urinary tract infection, as her urinalysis revealed pyuria. The baby continued to have persistent fever and hypoactivity in spite of intravenous antibiotics. Later, she developed non-purulent conjunctivitis, skin mottling, and oedema of the face and lower limbs, and was treated in the intensive care unit as a case of septic shock. In spite of partial general improvement, she continued to look unwell and hypoactive, with persistent fever. Septic work-up and metabolic and immunologic screens were negative. KD was suspected when the baby developed a polymorphic erythematous rash and peeling of the skin in the perianal area and the periungual areas of the fingers and toes. IVIG was given at a dose of 2 g/kg as a single dose, and aspirin at 100 mg/kg/day in four divided doses. The girl showed marked clinical improvement: the fever subsided dramatically and the levels of acute-phase reactants markedly decreased, but the platelet count increased to 1,600,000/mm3. Echocardiography showed mild dilatation of the mid right coronary artery.
Aspirin was continued at a dose of 5 mg/kg/day until repeat echocardiography. Conclusion: A high index of suspicion for KD must be maintained in young infants with prolonged unexplained fever. Accepted criteria should be less restrictive to allow early diagnosis of atypical KD in infants less than 6 months of age. Timely, appropriate treatment with IVIG is essential to avoid severe coronary sequelae.

Keywords: Kawasaki disease, atypical Kawasaki disease, infantile Kawasaki disease, hypoactivity

Procedia PDF Downloads 319
1446 The Impression of Adaptive Capacity of the Rural Community in the Indian Himalayan Region: A Way Forward for Sustainable Livelihood Development

Authors: Rommila Chandra, Harshika Choudhary

Abstract:

The value of integrated, participatory, and community-based sustainable development strategies is evident, but in practice they remain fragmentary and often lead to short-lived results. Despite the global presence of climate change, its impacts are felt differently by different communities depending on their vulnerability. Developing countries have low adaptive capacity and high dependence on environmental variables, making them highly susceptible to outmigration and poverty. We need to understand how to enable these approaches, taking into account the various governmental and non-governmental stakeholders functioning at different levels, to deliver long-term socio-economic and environmental well-being for local communities. This research assessed the financial and natural vulnerability of Himalayan networks, focusing on their potential to adapt to various changes, by assessing their perceived reactions and local knowledge. The evaluation was conducted by testing vulnerability indices, with a major focus on indicators of adaptive capacity. Data for the analysis were collected from villages around Govind National Park and Wildlife Sanctuary, located in the Indian Himalayan Region. The villages were stratified on the basis of road connectivity, giving two kinds of human settlements: connected and isolated. The study focused on understanding the complex relationship between outmigration and the socio-cultural sentiment of local people not to abandon their land, assessing their adaptive capacity for livelihood opportunities, and exploring the contribution that integrated participatory methodologies can make in delivering sustainable development.
The results showed that villages with better road connectivity, access to markets, and basic amenities such as health and education have a better understanding of climatic shifts and natural hazards, and a higher adaptive capacity for income generation, than the isolated settlements in the hills. A participatory approach towards environmental conservation and the sustainable use of natural resources was more prevalent in the far-flung villages. The study helped to reduce the gap between local understanding and government policies by highlighting ongoing adaptive practices and suggesting precautionary strategies for the community studied, based on local conditions that differ with connectivity and state of development. Adaptive capacity in this study has been taken as the externally driven potential of different parameters, leading to a decrease in outmigration and an upliftment of the human environment that could support sustainable livelihood development in the rural Himalayas.

Keywords: adaptive capacity, Indian Himalayan region, participatory, sustainable livelihood development

Procedia PDF Downloads 118
1445 Management Problems in a Patient With Long-term Undiagnosed Permanent Hypoparathyroidism

Authors: Babarina Maria, Andropova Margarita

Abstract:

Introduction: Hypoparathyroidism (HypoPT) is a rare endocrine disorder with an estimated prevalence of 0.25 per 1000 individuals. The most common cause of HypoPT is the loss of active parathyroid tissue following thyroid or parathyroid surgery. Sometimes permanent postoperative HypoPT occurs, manifested by hypocalcemia in combination with low levels of PTH for 6 months or more after surgery. Cognitive impairments are observed in patients with hypocalcemia due to chronic HypoPT, and these can lead to problems in everyday living, such as memory loss and impaired concentration, which may in turn cause poor compliance. Clinical case: Patient K., 66 years old, underwent thyroidectomy in 2013 (at the age of 55) for papillary thyroid cancer T1NxMx; histopathology confirmed the diagnosis. For 5 years after the surgery, she was followed up on an outpatient basis: only TSH levels were monitored, and the dose of levothyroxine was adjusted. In 2018, because of increasing complaints including tingling and cramps in the arms and legs, memory loss, sleep disorder, fatigue, anxiety, hair loss, muscle pain, and tachycardia, she was examined; positive Chvostek and Trousseau signs were found, and blood analyses showed total Ca 1.86 mmol/l (2.15-2.55), Ca++ 0.96 mmol/l (1.12-1.3), P 1.55 mmol/l (0.74-1.52), and Mg 0.79 mmol/l (0.66-1.07). Chronic postoperative HypoPT was diagnosed. Therapy was initiated: alfacalcidol 0.5 mcg per day, calcium carbonate 2000 mg per day, cholecalciferol 1000 IU per day, and magnesium orotate 3000 mg per day. During follow-up, hypocalcemia and hyperphosphatemia persisted, and hypercalciuria of 15.7 mmol/day (2.5-6.5) was diagnosed. Dietary recommendations were given to limit phosphorus-rich foods, and therapy was adjusted: the dose of alfacalcidol was increased to 2.5 mcg per day, and the dose of calcium carbonate was reduced to 1500 mg per day.
Screening for complications of HypoPT found no evidence of cataracts, Fahr syndrome, nephrocalcinosis, or kidney stone disease. However, HypoPT compensation was not achieved, and therefore hydrochlorothiazide 25 mg was initiated, the dose of alfacalcidol was increased to 3 mcg per day and calcium carbonate to 3000 mg per day, while magnesium orotate and cholecalciferol were continued at the same doses. Therapeutic goals were achieved: the calcium-phosphate product was kept below 4.4 mmol2/l2, there were no episodes of hypercalcemia, and twenty-four-hour urinary calcium excretion was significantly reduced. Conclusion: Timely prescription, careful explanation of medication use, and monitoring and maintaining blood and urine parameters within target ranges contribute to preventing the development of HypoPT complications and life-threatening events.
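The calcium-phosphate product target mentioned above is a simple multiplication of serum total calcium and phosphate. As a hedged illustration (the helper function is hypothetical, not from the report), the product at diagnosis can be checked against the &lt; 4.4 mmol2/l2 threshold:

```python
# Illustrative check of the calcium-phosphate product target (< 4.4 mmol^2/l^2)
# using the serum values reported at diagnosis. Hypothetical helper; not part
# of the case report.

def ca_p_product(total_ca_mmol_l, phosphate_mmol_l):
    """Serum calcium-phosphate product in mmol^2/l^2."""
    return total_ca_mmol_l * phosphate_mmol_l

product = ca_p_product(1.86, 1.55)  # total Ca and P at diagnosis
within_target = product < 4.4
```

Note that the product can sit below the threshold even while calcium itself is dangerously low, which is why the therapeutic goals track both the product and the individual parameters.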

Keywords: hypoparathyroidism, hypocalcemia, hyperphosphatemia, hypercalciuria

Procedia PDF Downloads 108
1444 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults into PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults into PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied, which is especially challenging in low-strain-rate regions where such data are scarce. Including faults in PSHA requires converting geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, background earthquakes are handled with a truncated model: earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, whereas magnitudes higher than the threshold are located on the fault, at a rate defined by the fault's average slip rate. As highlighted by several studies, seismic events with magnitudes above the selected threshold may nevertheless occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorically in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that calculates earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
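The core of any slip-rate-to-activity-rate conversion is a seismic-moment balance. The sketch below shows the simplest such conversion, a single characteristic rupture consuming the full moment budget; this is a deliberate simplification of what SHERIFS does across many fault-to-fault ruptures, and all parameter values are hypothetical.

```python
# Hedged sketch of converting a fault slip rate into an earthquake activity
# rate via seismic-moment balance. SHERIFS distributes the budget over many
# single-fault and fault-to-fault ruptures; here a single characteristic
# magnitude consumes it all. Parameter values are hypothetical.

def moment_from_magnitude(mw):
    """Seismic moment in N*m from moment magnitude (Hanks-Kanamori scaling)."""
    return 10 ** (1.5 * mw + 9.05)

def characteristic_rate(fault_area_m2, slip_rate_m_per_yr, mw,
                        shear_modulus=3.0e10):
    """Annual rate of a characteristic magnitude-mw event that consumes the
    fault's full moment budget (moment rate = mu * A * slip rate)."""
    moment_rate = shear_modulus * fault_area_m2 * slip_rate_m_per_yr
    return moment_rate / moment_from_magnitude(mw)

# 30 km x 12 km fault plane, 0.1 mm/yr slip rate (a slow-deforming region),
# characteristic Mw 6.5
rate = characteristic_rate(30e3 * 12e3, 1e-4, 6.5)
recurrence_years = 1.0 / rate
```

Even this toy calculation shows why low-strain-rate regions are delicate: at 0.1 mm/yr the recurrence interval runs to millennia, far beyond the span of most earthquake catalogues.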

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 66
1443 The Taiwan Environmental Impact Assessment Act Contributes to the Water Resources Saving

Authors: Feng-Ming Fan, Xiu-Hui Wen

Abstract:

Shortage of water resources is a crucial problem to be solved in Taiwan. However, the lack of effective and mandatory regulation of water recovery and recycling means there are currently no effective water resource controls. Although existing legislation sets standards for water recovery, its implementation and enforcement face challenges. To break through this dilemma, this study aims to find enforcement tools, improve inspection skills, and develop an inspection system, in order to achieve sustainable development of precious water resources. The Taiwan Environmental Impact Assessment Act (EIA Act) was promulgated in 1994. The aim of the EIA Act is to protect the environment by preventing and mitigating the adverse impact of development activity on the environment. During the EIA process, standards can be set that require enterprises to reach a certain percentage of water recycling based on the characteristics of each case, promoting sewage source reduction and water-saving benefits. Enterprises are then inspected on how they handle their wastewater and perform water recovery against their environmental assessment commitments, in order to review and measure the implementation efficiency of water recycling and reuse, an eco-friendly measure. We invited leading experts in related fields to lecture on water recycling, strengthened enforcement officials' inspection knowledge, and wrote an inspection reference manual to serve as the basis of enforcement, finalizing it by mutual agreement between the experts and the relevant agencies. We then individually inspected 65 high-tech companies with daily water consumption over 1,000 tons, located in 3 science parks set up by the Ministry of Science and Technology. A great achievement in water recycling was recorded: 400 million tons per year, equivalent to 2.5 months of water usage by the general public in Taiwan.
This amount is equal to 710 billion 600-ml bottles of cola, 170 thousand international-standard swimming pools of 2,500 tons each, the irrigation water applied to 40 thousand hectares of rice fields, or 1.7 times the storage of the Taipei Feitsui Reservoir. This study demonstrated the promoting effect of environmental impact assessment commitments on water recycling, and therefore on the sustainable development of water resources. It also confirms the value of the EIA Act for environmental protection. Economic development should go hand in hand with environmental protection, and this is now the mainstream view. The study clearly shows that EIA regulation can minimize the harmful effects of development activity on the environment while pursuing the sustainable development of water resources.

Keywords: the environmental impact assessment act, water recycling environmental assessment commitment, water resource sustainable development, water recycling, water reuse

Procedia PDF Downloads 247
1442 The Surgical Trainee Perception of the Operating Room Educational Environment

Authors: Neal Rupani

Abstract:

Background: A surgical trainee has limited learning opportunities in the operating room in which to gain an ever-increasing standard of surgical skill, competency, and proficiency. These opportunities continue to decline due to numerous factors, such as the European Working Time Directive and increasing requirements for service provision. It is therefore imperative to obtain the highest educational value from each learning opportunity. The Operating Room Educational Environment Measure (OREEM), which has yet to be validated on surgical trainees in England, has been developed to identify and evaluate each component of the educational environment, with a view to steering future change in optimising educational events in theatre. Aims: The aims of the study are to assess the reliability of the OREEM within England and to evaluate surgical trainees' objective perspective of the current operating room educational environment within one region of England. Methods: Using a quantitative study approach, data were collected over one month from surgical trainees within Health Education Thames Valley (Oxford) using an online questionnaire consisting of demographic data, the OREEM, and a global satisfaction score. Results: 140 surgical trainees were invited to the study, with 54 responding online (response rate = 38.6%). The OREEM was shown to have good internal consistency (α = 0.906, variables = 40) and unidimensionality, as did all four of its subgroups. The mean OREEM score was 79.16%. The areas highlighted for improvement predominantly focused on improving learning opportunities (average subscale score = 72.9%) and conducting pre- and post-operative teaching (average score = 70.4%). Trainee perception was most satisfactory for the level of supervision and workload (average subscale score = 82.87%).
There were no differences by gender (U = 191.5, p = 0.535) or type of hospital (U = 258.0, p = 0.099), but the learning environment was rated more favourably by senior trainees (U = 223.5, p = 0.017). There was a strong correlation between the OREEM and the global satisfaction score (r = 0.755, p < 0.001). Conclusions: The OREEM was shown to be reliable in measuring the educational environment in the operating room. It can be used to identify potentially modifiable components for improvement and as an audit tool to ensure high standards are being met. The current perception of the educational environment in Health Education Thames Valley is satisfactory, and modifiable internal and external factors, such as reducing service-provision requirements, empowering trainees to plan lists, creating a team-working ethic among all personnel, and using tools that maximise learning from each operation, have been identified to improve learning in the future. There is a favourable attitude towards the use of such improvement tools, especially among those currently dissatisfied.
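The internal-consistency figure quoted above (α = 0.906) is Cronbach's alpha. A minimal sketch of that statistic on a small, entirely hypothetical response matrix (the study's 40-item OREEM data are not reproduced here):

```python
from statistics import variance

# Minimal Cronbach's alpha sketch. Input is a list of per-item score lists,
# one list per questionnaire item, same respondents in the same order.
# The demo matrix below is hypothetical, not OREEM data.

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / variance(totals))

# three items, five respondents (hypothetical Likert responses)
demo = [[4, 5, 3, 4, 4], [4, 4, 3, 5, 4], [5, 5, 2, 4, 4]]
alpha = cronbach_alpha(demo)
```

Values above roughly 0.9, as reported for the OREEM, indicate that the items move together closely enough to be summed into a single scale.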

Keywords: education environment, surgery, post-graduate education, OREEM

Procedia PDF Downloads 184
1441 The Church of San Paolo in Ferrara, Restoration and Accessibility

Authors: Benedetta Caglioti

Abstract:

The ecclesiastical complex of San Paolo in Ferrara represents a monument of great historical, religious and architectural importance. Its long and articulated story, over time, is already manifested by the mere reading of its planimetric and altimetric configuration, apparently unitary but, in reality, marked by modifications and repeated additions, even of high quality. It follows, in terms of protection, restoration and enhancement, a commitment of due respect for how the ancient building was built and enriched over its centuries of life. Hence a rigorous methodological approach, while being aware of the fact that every monument, in order to live and make use of the indispensable maintenance, must always be enjoyed and visited, therefore it must enjoy, in the right measure and compatibly with its nature, the possibility of improvements and functional, distributive, technological adjustments and related to the safety of people and things. The methodological approach substantiates the different elements of the project (such as distribution functionality, safety, structural solidity, environmental comfort, the character of the site, building and urban planning regulations, financial resources and materials, the same organization methods of the construction site) through the guiding principles of restoration, defined for a long time: the 'minimum intervention,' the 'recognisability' or 'distinguishability' of old and new, the Physico-chemical and figurative 'compatibility,' the 'durability' and the, at least potential, 'reversibility' of what is done, leading to the definition of appropriate "critical choices." 
The project tackles, together with the strictly functional ones, also the directly conservative and restoration issues, of a static, structural and material technology nature, with special attention to precious architectural surfaces, In order to ensure the best architectural quality through conscious enhancement, the project involves a redistribution of the interior and service spaces, an accurate lighting system inside and outside the church and a reorganization of the adjacent urban space. The reorganization of the interior is designed with particular attention to the issue of accessibility for people with disabilities. To accompany the community to regain possession of the use of the church's own space, already in its construction phase, the project proposal has hypothesized a permeability and flexibility in the management of the works such as to allow the perception of the found Monument to gradually become more and more familiar at the citizenship. Once the interventions have been completed, it is expected that the Church of San Paolo, second in importance only to the Cathedral, from which it is a few steps away, will be inserted in an already existing circuit of use of the city which over the years has systematized the different aspects of culture, the environment and tourism for the creation of greater awareness in the perception of what Ferrara can offer in cultural terms.

Keywords: conservation, accessibility, regeneration, urban space

Procedia PDF Downloads 108
1440 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgements (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded by a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or a difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more when given prospective instructions, but only in the eight-minute task.
Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the feeling of time and the estimation of time differently, supporting the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (M = 42.5 minutes) and overestimated the eight-minute task (M = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly more slowly on the Likert scale (M = 2.5) than the eight-minute task (M = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a 'slow-feeling' condition and overestimate a 'fast-feeling' condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
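The RATIO transform used above is simply estimate divided by actual duration, which puts the two task lengths on a common scale. A small sketch, using the group mean estimates quoted above as inputs:

```python
# RATIO transform (estimate / actual duration) used to standardize duration
# estimates across the two task lengths. Inputs are the group means quoted
# in the abstract; > 1 means overestimation, < 1 means underestimation.

def duration_ratio(estimate_min, actual_min):
    return estimate_min / actual_min

short_ratio = duration_ratio(10.7, 8)   # eight-minute task, mean estimate
long_ratio = duration_ratio(42.5, 58)   # 58-minute task, mean estimate
```

The short task lands above 1 (overestimated) and the long task below 1 (underestimated), which is exactly the compensatory pattern the abstract describes.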

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 130
1439 DIF-JACKET: a Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although research and development in thermal protective clothing has sought solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while still meeting the international standard for firefighters' protective clothing: laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project's studies and developments, to make greater use of the PCM storage capacity and to exploit its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest was envisaged, specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket.
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. For firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the overall solutions. Results obtained with an experimental bench-scale model and numerical simulation of the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
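The reason a PCM layer increases allowable exposure time can be sketched with a back-of-the-envelope latent-heat balance, Q = m × L: until the PCM has fully melted, the incident flux goes into the phase change rather than into heating the layers behind it. All figures below are hypothetical, not project data.

```python
# Back-of-the-envelope sketch of the extra protection time a PCM layer buys:
# the time for the incident heat flux to fully melt the PCM (Q = m * L).
# All values are hypothetical, not DIF-JACKET project data.

def added_protection_time_s(pcm_mass_kg, latent_heat_j_per_kg,
                            incident_heat_flux_w_m2, exposed_area_m2):
    """Seconds until the PCM's latent-heat budget is exhausted."""
    q_absorbed = pcm_mass_kg * latent_heat_j_per_kg       # J
    power_in = incident_heat_flux_w_m2 * exposed_area_m2  # W
    return q_absorbed / power_in

# 1.5 kg of paraffin-like PCM (L ~ 200 kJ/kg) over 0.5 m^2 of torso,
# under a 5 kW/m^2 radiant flux (a moderate wildland-fire exposure)
t_extra = added_protection_time_s(1.5, 2.0e5, 5.0e3, 0.5)
```

This idealisation ignores conduction through the other layers and partial melting, but it makes clear why placing the PCM closer to the external heat source, as the project concluded, uses the storage capacity more efficiently.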

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 163