Search results for: heat loss
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6052

1612 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Known as the “kidneys of our planet,” they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and the loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project uses satellite data and correlates derived metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 satellite indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK were used to obtain 10 in situ parameters (OPs), whose average values were likewise calculated per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were computed and found to be inadequate (correlations of 1 to 64%). Fourier transform analysis was then performed on the 7 SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs at certain time delays (0, 3, 4, and 6 months), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can provide periodic analyses of overall wetland health from satellite indices. It shows that remote sensing can be correlated with critical in situ parameters that measure eutrophication, and can be used by practitioners to monitor wetland health easily.
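The dominant-frequency extraction step the abstract describes can be illustrated with a short sketch. This is a minimal, hypothetical reconstruction: the 23-year monthly index series below is synthetic, and the number of peaks kept is an assumption, not the author's actual pipeline.

```python
import numpy as np

def dominant_fft_features(series, n_peaks=3):
    """Return the strongest frequencies (cycles/month) and their amplitudes."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                          # remove the DC component
    amps = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)    # monthly sampling interval
    top = np.argsort(amps)[::-1][:n_peaks]    # indices of the strongest peaks
    return freqs[top], amps[top]

# Synthetic monthly index with an annual (1/12 cycles/month) signal plus noise
months = np.arange(23 * 12)
si = np.sin(2 * np.pi * months / 12) \
     + 0.1 * np.random.default_rng(0).normal(size=months.size)
f, a = dominant_fft_features(si)
```

The extracted `(f, a)` pairs for each index would then serve as features for the downstream regression model.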

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 81
1611 Design and Development of Tandem Dynamometer for Testing and Validation of Motor Performance Parameters

Authors: Vedansh More, Lalatendu Bal, Ronak Panchal, Atharva Kulkarni

Abstract:

The project aims at developing a cost-effective test bench capable of testing and validating the complete powertrain package of an electric vehicle. An Emrax 228 high-voltage synchronous motor was selected as the prime mover for the study. A tandem-type dynamometer was developed, comprising two loading methods: inertial, using standard inertia rollers, and absorptive, using a separately excited DC generator with resistive coils. Absorptive loading of the prime mover was achieved by implementing a converter circuit through which the duty cycle of the input field voltage was controlled. This control was efficacious in changing the magnetic flux, and hence the generated voltage, which was ultimately dropped across resistive coils assembled in a load bank in an all-parallel configuration. The prime mover and loading elements were connected via a chain drive with a 2:1 reduction ratio, which allows flexibility in the placement of components and a relaxed rating of the DC generator. The development will not only aid in the determination of essential characteristics such as torque-RPM, power-RPM, torque factor, RPM factor, device heat loads, and battery pack state-of-charge efficiency, but also provide a significant financial advantage over existing dynamometers through its cost-effective design.
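The control chain the abstract describes (PWM duty → average field voltage → generated EMF → power dropped across the parallel load bank) can be sketched as follows. All numerical values, and the linear unsaturated EMF model, are illustrative assumptions, not measured parameters of the Emrax 228 rig.

```python
def generator_load_power(duty, v_supply, k_e, rpm, r_coil, n_coils):
    """Estimate power absorbed by a resistive load bank fed by a separately
    excited DC generator whose field winding is PWM-controlled.
    k_e is a hypothetical linear EMF constant (V per V_field per RPM)."""
    v_field = duty * v_supply        # average field voltage set by the PWM duty
    emf = k_e * v_field * rpm        # assume EMF linear in field voltage and speed
    r_load = r_coil / n_coils        # identical coils wired in parallel
    return emf ** 2 / r_load         # power dissipated across the bank, W

# Illustrative operating point
p = generator_load_power(duty=0.6, v_supply=24.0, k_e=1e-3,
                         rpm=3000, r_coil=10.0, n_coils=5)
```

Sweeping `duty` in such a model is one way to map out the absorptive part of a torque-RPM characteristic before building hardware.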

Keywords: absorptive load, chain drive, chordal action, DC generator, dynamometer, electric vehicle, inertia rollers, load bank, powertrain, pulse width modulation, reduction ratio, road load, testbench

Procedia PDF Downloads 200
1610 Causes and Impacts of Rework Costs in Construction Projects

Authors: Muhammad Ejaz

Abstract:

Rework has been defined as "the unnecessary effort of re-doing a process or activity that was incorrectly implemented the first time." Rework is a great threat to the construction industry. By and large, due attention has not been given to avoiding the causes of rework in civil engineering projects, resulting in time and cost overruns. Besides these direct consequences, there may also be indirect consequences, such as stress, de-motivation, or the loss of future clients. When delivered products do not meet requirements or expectations, work often has to be redone. Rework occurs in various phases of the construction process and in various divisions of a company. It can occur on the construction site or in a management department, due, for example, to poor materials management. Rework can also have internal or external origins; changes in clients’ expectations are an example of an external factor that might lead to rework. Rework can cause many costs to be higher than calculated at the start of the project. Rework events can have many different origins, and for this research they have been categorized into four categories: changes, errors, omissions, and damages. The research showed that the major sources of rework were an unprofessional attitude among technical staff and ignorance of total quality management principles by stakeholders. It also revealed that the sources of rework do not differ greatly among project categories. The causes were further analyzed by interviewing employees. Based on the existing literature, an extensive list of rework causes was compiled, and during the interviews the interviewees were asked to confirm or deny statements regarding these causes. The causes that were most frequently confirmed can be grouped into the above categories: at most 56% of the causes are change-related, at most 30% are error-related, and at most 18% fall into another category. Therefore, by recognizing the above-mentioned factors, rework can be reduced to a great extent.

Keywords: total quality management, construction industry, cost overruns, rework, material management, client’s expectations

Procedia PDF Downloads 272
1609 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations

Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik

Abstract:

The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. Work simulations of the motor (applied to a Fiat Panda car) under selected driving conditions, such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h, are considered by the authors. The demand for drive power, taking into account the resistance to motion, was determined for the selected driving conditions. The parameters of motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected driving conditions. The calculated power losses are then used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions, and the losses calculated by the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components, and their impact on the limitation of its operating parameters, are described by the authors. Particular attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented calculation results show how the individual motor power losses, generated under different load conditions while driving, affect its thermal state.

Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor

Procedia PDF Downloads 150
1608 Comparative Assessment of hCG with Estrogen in Increasing Pregnancy Rate in Mixed Parity Buffaloes

Authors: Sanan Raza, Tariq Abbas, Ahmad Yar Qamar, Muhammad Younus, Hamayun Khan, Mujahid Zafar

Abstract:

Water buffaloes contribute significantly to Asian agriculture. The objective of this study was to evaluate the efficacy of two synchronization protocols in enhancing the pregnancy rate in 105 mixed-parity buffaloes, particularly in the summer season. Buffaloes are seasonal breeders, showing higher fertility from October to January in the subtropical environment of Pakistan. In the current study, 105 lactating buffaloes of mixed parity were used, having normal estrous cycles, ages ranging from 5 to 9 years, weights between 400 and 650 kg, BCS 4 ± 0.5 (scale 1-5), and lactations varying from first to fifth. Experimental animals were divided into three groups based on corpus luteum morphometry, which was performed using rectal palpation and ultrasonography. All animals were injected i.m. with 25 mg of PG (cloprostenol). Group 1 (n=35) received hCG at a follicular size of 10 mm, scanned after detection of heat. Similarly, Group 2 (n=35) received 25 mg of estradiol benzoate (EB) i.m. after confirmation of a 10 mm follicular size by ultrasound. Buffaloes of Group 3 (n=35), serving as the control, were administered normal saline. All buffaloes in the three groups were inseminated 12 h after the hCG, EB, and normal saline administration, respectively. Pregnancy was assessed by ultrasound on the 18th and 45th days post insemination. Pregnancy rates on day 18 were 38.2%, 34.5%, and 27.3% for G1, G2, and G3, respectively, indicating no difference between the hCG- and EB-administered groups, while the control group had a lower conception rate than both. Similarly, on day 42 the rates were 40.4% and 32.7% for G1 and G2, significantly higher than the 26.6% of G3 (control). The hCG- and EB-treated buffaloes also had a higher probability of pregnancy than the control group. Based on the findings of the current study, it seems reasonable to conclude that the use of hCG and EB is associated with improved pregnancy rates in the non-breeding season of buffaloes.

Keywords: buffalo, hCG, EB, pregnancy rate, follicle, insemination

Procedia PDF Downloads 781
1607 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase-transition, and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) compound NaZnAs in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as the other structural parameters of each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic band structures for all structural phases, at the level of both the PBE-GGA and mBJ potentials, indicate that NaZnAs is a direct (Γ–Γ) band gap semiconductor. However, compared to PBE-GGA, the mBJ potential approximation reproduces higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient, and energy loss-function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 411
1606 Comparing the Embodied Carbon Impacts of a Passive House with the BC Energy Step Code Using Life Cycle Assessment

Authors: Lorena Polovina, Maddy Kennedy-Parrott, Mohammad Fakoor

Abstract:

The construction industry accounts for approximately 40% of total GHG emissions worldwide. In order to limit global warming to 1.5 degrees Celsius, ambitious reductions in the carbon intensity of our buildings are crucial. Passive House presents an opportunity to reduce operational carbon by as much as 90% compared to a traditional building by improving thermal insulation, limiting thermal bridging, and increasing airtightness and heat recovery. Until recently, Passive House design was mainly concerned with meeting energy demands without considering embodied carbon. As buildings become more energy-efficient, embodied carbon becomes more significant. The main objective of this research is to calculate the embodied carbon impact of a Passive House and compare it with the BC Energy Step Code (ESC). British Columbia is committed to increasing the energy efficiency of buildings through the ESC, which targets net-zero energy-ready buildings by 2032. However, there is a knowledge gap in the embodied carbon impacts of more energy-efficient buildings, in particular Part 3 construction. In this case study, life cycle assessments (LCA) are performed on a Part 3 multi-unit residential building in Victoria, BC. The actual building is not constructed to the Passive House standard; however, the building envelope and mechanical systems are designed to comply with the Passive House criteria, as well as with Steps 1 and 4 of the ESC for comparison. OneClick LCA is used to perform the LCA of the case studies. Several strategies are also proposed to minimize the total carbon emissions of the building. The assumption is that, because of the building envelope, there will not be significant differences in embodied carbon between a Passive House and a Step 4 building.

Keywords: embodied carbon, energy modeling, energy step code, life cycle assessment

Procedia PDF Downloads 124
1605 Effects of High-Protein, Low-Energy Diet on Body Composition in Overweight and Obese Adults: A Clinical Trial

Authors: Makan Cheraghpour, Seyed Ahmad Hosseini, Damoon Ashtary-Larky, Saeed Shirali, Matin Ghanavati, Meysam Alipour

Abstract:

Background: In addition to reducing body weight, low-calorie diets can reduce lean body mass. It is hypothesized that a high-protein, low-calorie diet can maintain lean body mass while reducing body weight. The current study therefore aimed at evaluating the effects of a high-protein diet with calorie restriction on body composition in overweight and obese individuals. Methods: 36 obese and overweight subjects were randomly divided into two groups. The first group received a normal-protein, low-energy diet (RDA), and the second group received a high-protein, low-energy diet (2×RDA). Anthropometric indices including height, weight, body mass index, body fat mass, fat-free mass, and body fat percentage were evaluated before and after the study. Results: A significant reduction in anthropometric indices was observed in both groups (high-protein, low-energy and normal-protein, low-energy diets). In addition, a greater reduction in fat-free mass was observed in the normal-protein, low-energy diet group than in the high-protein, low-energy diet group. For the other anthropometric indices, no significant differences were observed between the two groups. Conclusion: Independently of the type of diet, a low-calorie diet can improve anthropometric indices, but during weight loss a high-protein diet can help maintain fat-free mass.

Keywords: diet, high-protein, body mass index, body fat percentage

Procedia PDF Downloads 282
1604 Identifying Issues of Corporate Governance and the Effect on Organizational Performance

Authors: Abiodun Oluwaseun Ibude

Abstract:

Every now and then we hear of companies closing down their operations due to unethical practices such as overstatement of the company’s balance sheet, concealment of company debt, embezzlement of company funds, and declaration of false profits. This has led to the liquidation of companies and the loss of shareholders’ investments as well as the interests of other stakeholders. As a result of these ugly trends, there is a need to put in place a formidable mechanism that will ensure that business activities are conducted in a healthy manner. It should also promote good ethics and ensure that the interests of stakeholders and the objectives of the organization are achieved within the confines of the law, wherein the law exists to provide criminal penalties for the falsification of documents and for other irregularities. Based on the foregoing, it becomes imperative to ensure that steps are taken to stop this menace and face the challenges ahead. This calls for the practice of good governance. The purpose of this study is to identify the various components of corporate governance and determine their impact on the performance of established organizations. A survey method using a questionnaire was applied to collect data for this study, which were later analyzed using correlation coefficient statistics to generate findings, draw conclusions, and make recommendations. The research found that there are systems within organizations, apart from regulatory agencies, that ensure effective control of activities, promote accountability, and support operational efficiency. However, some members of organizations fail to make use of corporate governance, which impacts negatively on organizational performance. In conclusion, good corporate governance will not be achieved unless there is openness, honesty, transparency, accountability, and fairness.

Keywords: corporate governance, formidable mechanism, company’s balance sheet, stakeholders

Procedia PDF Downloads 97
1603 Determining Design Parameters for Sizing of Hydronic Heating Systems in Concrete Thermally Activated Building Systems

Authors: Rahmat Ali, Inamullah Khan, Amjad Naseer, Abid A. Shah

Abstract:

Hydronic heating and cooling systems in concrete-slab-based buildings are increasingly becoming a popular substitute for conventional heating and cooling systems. In exploring the materials and techniques employed, and their relative performance, a fair amount of uncertainty exists. This research identified the simplest method of determining the thermal field of a single hydronic pipe acting as part of a concrete slab, on the basis of which the spacing and positioning of pipes for the best thermal performance and surface temperature control were determined. The pipe material chosen was the commonly used PEX pipe, which has good all-around performance and thermal characteristics, with a thermal conductivity of 0.5 W/mK. Concrete test samples were constructed and their thermal fields tested under varying input conditions. Temperature sensors were embedded in the wet concrete at fixed distances from the pipe, and additional touch-sensing temperature devices were employed to determine the extent of the thermal field and for validation studies. In the first stage, it was found that the temperature along a specific distance was uniform and that heat dissipation occurred in well-defined layers. The temperature obtained in the concrete was then related to the various control parameters, including the water supply temperature. From the results, the water temperature required for a specific temperature rise in the concrete was determined. The thermally effective area was also determined, which was then used to calculate the pipe spacing and positioning for the desired level of thermal comfort.
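The last step the abstract mentions, relating a target slab surface temperature to a required water supply temperature, can be sketched with a simple one-dimensional thermal-resistance model. This is an illustrative simplification (it ignores pipe-wall and contact resistances), and the resistance values are hypothetical, not measured values from the study.

```python
def supply_temp_required(t_surface, t_room, r_slab, r_surface):
    """Steady-state 1D sketch: the heat flux leaving the floor surface must be
    supplied through the concrete layer between pipe depth and surface.
    r_slab and r_surface are thermal resistances in m2.K/W (illustrative)."""
    q = (t_surface - t_room) / r_surface   # flux from floor surface to room, W/m2
    return t_surface + q * r_slab          # water temperature needed at pipe depth

# Example: hold the slab surface at 26 C in a 20 C room
t_water = supply_temp_required(t_surface=26.0, t_room=20.0,
                               r_slab=0.08, r_surface=0.10)
```

In practice the concrete-side resistance would come from the measured thermal field rather than a single lumped value.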

Keywords: thermally activated building systems, concrete slab temperature, thermal field, energy efficiency, thermal comfort, pipe spacing

Procedia PDF Downloads 310
1602 Network Pharmacological Evaluation of Holy Basil Bioactive Phytochemicals for Identifying Novel Potential Inhibitors Against Neurodegenerative Disorder

Authors: Bhuvanesh Baniya

Abstract:

Alzheimer’s disease is an illness responsible for neuronal cell death, resulting in lifelong cognitive problems. Because its mechanism remains unclear, no effective drugs are available for treatment. For a long time, herbal drugs have served as a model in the field of drug discovery. In the Indian medicinal system (Ayurveda), holy basil has been used for decades for neuronal disorders such as insomnia and memory loss. This study aims to identify active components of holy basil as potential inhibitors for the treatment of Alzheimer’s disease. To fulfill this objective, a network pharmacology approach, gene ontology, pharmacokinetics analysis, molecular docking, and molecular dynamics simulation (MDS) studies were performed. A total of 7 active components of holy basil, 12 predicted neurodegenerative targets of holy basil, and 8063 Alzheimer-related targets were identified from different databases. The network analysis showed that the top ten targets APP, EGFR, MAPK1, ESR1, HSPA4, PRKCD, MAPK3, ABL1, JUN, and GSK3B were significant targets related to Alzheimer’s disease. On the basis of the gene ontology and topology analysis results, APP was found to be a significant target related to Alzheimer’s disease pathways. Further, the molecular docking results showed that various compounds had the best binding affinities, and the top MDS results suggested that these compounds could act as potential inhibitors of the APP protein and could be useful for the treatment of Alzheimer’s disease.

Keywords: holy basil, network pharmacology, neurodegeneration, active phytochemicals, molecular docking and simulation

Procedia PDF Downloads 79
1601 Rapid Classification of Soft Rot Enterobacteriaceae Phyto-Pathogens Pectobacterium and Dickeya Spp. Using Infrared Spectroscopy and Machine Learning

Authors: George Abu-Aqil, Leah Tsror, Elad Shufan, Shaul Mordechai, Mahmoud Huleihel, Ahmad Salman

Abstract:

Pectobacterium and Dickeya spp., which negatively affect a wide range of crops, are the main causes of aggressive diseases of agricultural crops. These diseases are responsible for huge economic losses in agriculture, including a severe decrease in the quality of stored vegetables and fruits. It is therefore important to detect these pathogenic bacteria at the early stages of infection to control their spread and consequently reduce the economic losses. In addition, early detection is vital for producing non-infected propagative material for future generations. The molecular techniques currently used for identifying these bacteria at the strain level are expensive and laborious, and other techniques require a long time, about 48 h, for detection. Thus, there is a clear need for rapid, inexpensive, accurate, and reliable techniques for the early detection of these bacteria. In this study, infrared spectroscopy, a well-established technique, was used for the rapid detection of Pectobacterium and Dickeya spp. at the strain level. The bacteria were isolated from potato plants and tubers with soft rot symptoms and measured by infrared spectroscopy. The obtained spectra were analyzed using different machine learning algorithms, and the performance of the approach for taxonomic classification among the bacterial samples was evaluated in terms of success rates. The success rates for correct classification at the genus, species, and strain levels were ~100%, 95.2%, and 92.6%, respectively.
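The spectra-to-taxon classification step can be illustrated with a toy sketch. The "spectra" below are synthetic Gaussian bands, and a nearest-centroid rule stands in for the paper's (unspecified) machine learning algorithms; none of this is the authors' actual data or model.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_spectra(centre, n, n_points=200):
    """Synthetic absorbance 'spectra': one Gaussian band plus noise."""
    wn = np.linspace(0.0, 1.0, n_points)          # normalized wavenumber axis
    band = np.exp(-((wn - centre) ** 2) / 0.002)
    return band + 0.05 * rng.normal(size=(n, n_points))

# Two hypothetical taxa whose main absorption bands are slightly shifted
X = np.vstack([make_spectra(0.40, 30), make_spectra(0.45, 30)])
y = np.array([0] * 30 + [1] * 30)

# Train/test split, then classify test spectra by the nearest class-mean spectrum
train = np.r_[0:20, 30:50]
test = np.r_[20:30, 50:60]
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
accuracy = float((pred == y[test]).mean())
```

Real pipelines would add preprocessing (baseline correction, normalization) and cross-validation before reporting success rates.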

Keywords: soft rot enterobacteriaceae (SRE), pectobacterium, dickeya, plant infections, potato, solanum tuberosum, infrared spectroscopy, machine learning

Procedia PDF Downloads 80
1600 Production of Hydrophilic PVC Surfaces with Microwave Treatment for its Separation from Mixed Plastics by Froth Floatation

Authors: Srinivasa Reddy Mallampati, Chi-Hyeon Lee, Nguyen Thanh Truc, Byeong-Kyu Lee

Abstract:

Organic polymeric materials (plastics) are widely used in our daily life and in various industrial fields. The separation of waste plastics is important for feedstock and mechanical recycling. One of the major problems in incineration for thermal recycling, or in heat melting for material recycling, is the polyvinyl chloride (PVC) contained in waste plastics, owing to the production of hydrogen chloride, chlorine gas, dioxins, and furans originating from PVC. Therefore, the separation of PVC from waste plastics is necessary before recycling. The separation of heavy polymers (PVC 1.42, PMMA 1.12, PC 1.22, and PET 1.27 g/cm³) from light ones (PE and PP 0.99 g/cm³) can be achieved on the basis of their densities; however, it is difficult to separate PVC from the other heavy polymers on this basis, and no simple, inexpensive techniques exist for doing so. If the hydrophobic PVC surface is selectively rendered hydrophilic while the other polymers retain hydrophobic surfaces, a flotation process can separate PVC from the others. In the present study, the selective surface hydrophilization of PVC by microwave treatment after alkaline/acid washing and with activated carbon was studied as a pre-treatment for its separation by subsequent froth flotation. In the presence of activated carbon as an absorbent, the microwave treatment selectively increased the hydrophilicity of the PVC surface (the PVC contact angle decreased by about 19°) within the plastics mixture. At this stage, 100% separation of PVC from the other plastics could be achieved by combining the microwave pre-treatment with activated carbon and the subsequent froth flotation. Surface analysis suggested that the hydrophilization of PVC was due to hydrophilic groups produced by the microwave treatment with activated carbon. The effects of the optimum conditions and the detailed mechanism of separation efficiency in froth flotation were also investigated.

Keywords: hydrophilic, PVC, contact angle, additive, microwave, froth floatation, waste plastics

Procedia PDF Downloads 604
1599 The Effects of Changes in Accounting Standards on Loan Loss Provisions (LLP) as Earnings Management Device: Evidence from Malaysia and Nigeria Banks (Part I)

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri

Abstract:

In view of the dearth of studies on changes in accounting standards and banks’ earnings management, particularly in the context of emerging economies, and the recent change in Malaysia and Nigeria from their respective local GAAP to IFRS, this study set out to investigate the effects of the switch on banks’ earnings management, focusing on LLP as the manipulative device. The study employed judgmental sampling to select twenty-eight banks, eight Malaysian and twenty Nigerian, as the sample covering the period 2008-2013. To provide an empirical research setting in pursuit of the objective of this study, the study period was further partitioned into pre-IFRS (2008, 2009, 2010) and post-IFRS (2011, 2012, 2013) adoption periods. Consistent with previous studies, this study specifies an LLP regression model to investigate the specific discretionary accruals of banks. The findings suggest that both Malaysian and Nigerian banks used LLP to manage reported earnings more prior to IFRS implementation. Comparative overall results show that the pre-IFRS (domestic GAAP) era for both the Malaysian and Nigerian sample banks was associated with more prevalent earnings management through LLP than the corresponding post-IFRS era, in differing magnitudes but in favour of the Malaysian banks in both periods. With results demonstrating that IFRS adoption is linked to lower earnings management via LLP, this study recommends the global adoption of IFRS as a reporting framework. It also suggests that Nigerian banks embrace, and borrow a leaf from, the good corporate governance practices of Malaysian banks.
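The abstract does not specify the LLP model, but the standard approach in this literature is to regress LLP on non-discretionary determinants and treat the residual as the discretionary (earnings-management) component. A minimal sketch on synthetic data follows; the regressors named here are common choices in the literature, assumed rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # synthetic bank-year observations

# Hypothetical non-discretionary determinants of LLP, all scaled by assets:
# non-performing loans, change in loans, earnings before tax and provisions
npl, dloan, ebtp = rng.normal(size=(3, n))
llp = 0.6 * npl + 0.1 * dloan + 0.3 * ebtp + 0.05 * rng.normal(size=n)

# OLS via least squares; the residual proxies discretionary LLP
X = np.column_stack([np.ones(n), npl, dloan, ebtp])
beta, *_ = np.linalg.lstsq(X, llp, rcond=None)
residuals = llp - X @ beta   # the earnings-management proxy per bank-year
```

Comparing the magnitude of these residuals between the pre- and post-adoption subsamples is then what drives conclusions like those in the abstract.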

Keywords: accounting standards, IFRS, FRS, SAS, LLP, earnings management

Procedia PDF Downloads 381
1598 Implications of Fulani Herders/Farmers Conflict on the Socio-Economic Development of Nigeria (2000-2018)

Authors: Larry E. Udu, Joseph N. Edeh

Abstract:

Unarguably, land is an indispensable factor of production and has been instrumental in numerous conflicts between crop farmers and herders in Nigeria. The conflicts pose a grave challenge to life and property, food security, and ultimately to the sustainable socio-economic development of the nation. The paper examines the causes of the Fulani herder/farmer conflicts, particularly in the Middle Belt; the number of occurrences and the extent of damage; and their socio-economic implications. A content-analysis approach was adopted as the methodology, with data drawn extensively from secondary sources. Findings reveal that the major causes of the conflict are attributable to violations of tradition and laws, trespass, and cultural factors. Consequently, the frequency of attacks and the level of fatalities, coupled with the displacement of farmers and the destruction of private and public facilities, impacted negatively on farmers’ output, with attendant socio-economic implications for the sustainable livelihoods of the people and the nation at large. For instance, Mercy Corps (a global humanitarian organization), in its research covering 2013-2016, asserts that a loss of $14 billion was incurred within 3 years; that if the conflict were resolved, the average affected household could see income increase by at least 64 percent, and potentially 210 percent or higher; and that states affected by the conflicts lost an average of 47 percent of taxes/IGR. The paper therefore recommends strict adherence to grazing laws, a platform for dialogue built on compromise where necessary, and the encouragement of cattle farmers to build ranches for their cattle according to international standards.

Keywords: conflict, farmers, herders, Nigeria, socio-economic implications

Procedia PDF Downloads 173
1597 Composition and Distribution of Seabed Marine Litter Along Algerian Coast (Western Mediterranean)

Authors: Ahmed Inal, Samir Rouidi, Samir Bachouche

Abstract:

The present study focuses on the distribution and composition of seafloor marine litter associated with trawlable fishing areas along the Algerian coast. Sampling was done with a GOC73 bottom trawl during four demersal resource assessment cruises, in 2016, 2019, 2021, and 2022, carried out on board the R/V BELKACEM GRINE. A total of 254 fishing hauls were sampled for the assessment of marine litter. Hauls were performed between 22 and 600 m depth, with durations between 30 and 60 min, and all sampling was conducted during daylight. After each haul, the marine litter was sorted and separated from the catch. Then, following the MEDITS protocol, the litter was sorted into six categories (plastic, rubber, metal, wood, glass, and natural fibre), and all items were counted and weighed separately to the nearest 0.5 g. The results show that the maximum marine litter densities on the seafloor of the trawlable fishing areas along the Algerian coast were 1996 items/km² in 2016, 5164 items/km² in 2019, 2173 items/km² in 2021, and 7319 items/km² in 2022. Plastic was the most abundant litter, representing 46% of marine litter in 2016, 67% in 2019, 69% in 2021, and 74% in 2022. The weight of the marine litter per haul varied between 0.00 and 103 kg in 2016, between 0.04 and 81 kg in 2019, between 0.00 and 68 kg in 2021, and between 0.00 and 318 kg in 2022, and the maximum proportion of marine litter in the total catch was approximately 66% in 2016, 90% in 2019, 65% in 2021, and 91% in 2022. The average loss in catch is estimated at 7.4% in 2016, 8.4% in 2019, 5.7% in 2021, and 6.4% in 2022. Bathymetric and geographical variability had a significant impact on both the density and the weight of marine litter. A marine litter monitoring programme is necessary to inform further mitigation measures.
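Densities like the items/km² figures in this abstract are typically obtained with the swept-area method used in MEDITS-style trawl surveys: items counted in a haul divided by the area swept by the net. A minimal sketch (the haul length and wing-spread values are made-up examples, not the survey's data):

```python
def litter_density(items, haul_length_m, wing_spread_m):
    """Swept-area litter density (items/km2) for one haul.
    Swept area = distance towed x horizontal net opening (wing spread)."""
    swept_km2 = (haul_length_m / 1000.0) * (wing_spread_m / 1000.0)
    return items / swept_km2

# Example: 12 items in a 3 km tow with an 18 m wing spread
d = litter_density(items=12, haul_length_m=3000.0, wing_spread_m=18.0)
```

Per-haul densities computed this way are then averaged or mapped by stratum to produce annual figures.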

Keywords: composition, distribution, seabed, marine litter, Algerian coast

Procedia PDF Downloads 50
1596 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

From a psychological perspective, psychopathology is the area of clinical psychology that has psychological assessment and psychotherapy at its core. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. The paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Mini Mental State Examination-2 (MMSE-2) psychological tools contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with assessment of personality and cognitive functions, is illustrated by two cases: a severe depressive episode with psychotic symptoms and a mixed anxiety-depressive disorder. The order in which CBT, MMPI-2, and MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by the administration of the blue form MMSE-2, the MMPI-2, and the red form MMSE-2. In the second case, cognitive screening with the blue form MMSE-2 led to a personality assessment using the MMPI-2, followed by the red form MMSE-2, reapplication of the MMPI-2 due to the invalidation of the first profile, and finally psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of the therapeutic intervention: a detailed symptom picture revealing potentially self-destructive thoughts and behaviors otherwise undetected during the interview. Memory loss and poor concentration were confirmed by MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adapting psychological services to contemporary everyday life and paves the way for deepening and developing the field.

Keywords: assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology

Procedia PDF Downloads 307
1595 The Gold Standard Treatment Plan for Vitiligo: A Review on Conventional and Updated Treatment Methods

Authors: Kritin K. Verma, Brian L. Ransdell

Abstract:

White patches are a symptom of vitiligo, a chronic autoimmune dermatological condition that causes a loss of pigmentation in the skin. Vitiligo can impair self-esteem and quality of life and is associated with the development of other autoimmune diseases. Treatments exist in both allopathy and homeopathy; some have been found to be toxic, whereas others have been helpful. Allopathy offers several treatment options, such as phototherapy, skin lightening preparations, immunosuppressive drugs, combined modality therapy, and steroid medications, to improve vitiligo. This presentation will review the FDA-approved topical cream Opzelura, a JAK inhibitor, and its effects on limiting vitiligo progression. Meanwhile, non-conventional methods, such as Arsenic Sulphuratum Flavum used in homeopathy, will be debunked on the basis of current literature. Most treatments still serve to arrest progression and induce skin repigmentation, and treatment plans may differ between patients depending on the location of depigmentation on the skin. Since there is no gold standard plan for treating patients with vitiligo, the oral presentation will review all topical and systemic pharmacological therapies that counter depigmentation of the skin, categorize their validity through a systematic review of the literature, and attempt to formulate a gold standard treatment process for these patients.

Keywords: vitiligo, phototherapy, immunosuppressive drugs, skin lightening preparations, combined modality therapy, arsenic sulphuratum flavum, homeopathy, allopathy, gold standard, Opzelura

Procedia PDF Downloads 65
1594 Comparison of Anterolateral Thigh Flap with or without Acellular Dermal Matrix in Repair of Hypopharyngeal Squamous Cell Carcinoma Defect: A Retrospective Study

Authors: Yaya Gao, Bing Zhong, Yafeng Liu, Fei Chen

Abstract:

Aim: The purpose of this study was to explore the difference between an acellular dermal matrix (ADM) combined with an anterolateral thigh (ALT) flap and the ALT flap alone. Methods: HSCC patients treated between January 2014 and December 2018 were divided into group A (ALT) and group B (ALT+ADM). We compared and analyzed the intraoperative information and postoperative outcomes of the patients. Results: There were 21 and 17 patients in groups A and B, respectively. Operation time, blood loss, defect size and anastomotic vessel selection showed no significant differences between the two groups. Postoperative complications, including wound bleeding (n=0 vs. 1, p=0.459), wound dehiscence (n=0 vs. 1, p=0.459), wound infection (n=5 vs. 3, p=0.709), pharyngeal fistula (n=5 vs. 4, p=1.000) and hypoproteinemia (n=11 vs. 12, p=0.326), were comparable between the groups. Dysphagia at 6 months (liquid diet: 0 vs. 0; partial tube feeding: 1 vs. 1; total tube feeding: 1 vs. 0; p=0.655) also showed no significant differences. However, a significant difference was observed in dysphagia at 12 months (liquid diet: 0 vs. 0; partial tube feeding: 3 vs. 1; total tube feeding: 10 vs. 1; p=0.006). Conclusion: For HSCC patients, the ALT flap combined with ADM showed better swallowing function at 12 months than ALT treatment alone. The ALT flap combined with ADM may serve as a safe and feasible alternative for selected HSCC patients.
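The categorical comparisons above are consistent with Fisher's exact test on 2×2 tables. As an illustration (the abstract does not name its test, so this is an assumption), a minimal implementation reproduces the reported wound-infection p-value from the counts 5/21 vs. 3/17:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, c1 = a + b, a + c            # row-1 and column-1 margins
    denom = comb(n, r1)
    def p_table(x):                  # hypergeometric probability of a table
        return comb(c1, x) * comb(n - c1, r1 - x) / denom
    p_obs = p_table(a)
    lo, hi = max(0, r1 - (n - c1)), min(r1, c1)
    # Sum probabilities of all tables as or more extreme than the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Wound infection: 5/21 in group A (ALT) vs. 3/17 in group B (ALT+ADM)
p = fisher_exact_2x2(5, 16, 3, 14)
print(round(p, 3))  # → 0.709, matching the reported p-value
```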

Keywords: hypopharyngeal squamous cell carcinoma, anterolateral thigh free flap, acellular dermal matrix, reconstruction, dysphagia

Procedia PDF Downloads 61
1593 Removal of Polycyclic Aromatic Hydrocarbons (PAHS) and the Response of Indigenous Bacteria in Highly Contaminated Aged Soil after Persulfate Oxidation

Authors: Yaling Gou, Sucai Yang, Pengwei Qiao

Abstract:

Integrated chemical-biological treatment is an attractive alternative for removing polycyclic aromatic hydrocarbons (PAHs) from contaminated soil, wherein indigenous bacteria are the key factor for the biodegradation of the PAHs remaining after chemical oxidation. However, systematic study of the impact of persulfate (PS) oxidation on indigenous bacteria, as well as on PAHs removal, is still scarce. In this study, the influences of different PS dosages (1%, 3%, 6%, and 10% [w/w]) and various activation methods (native iron, H2O2, alkaline, ferrous iron, and heat) on PAHs removal and indigenous bacteria in highly contaminated aged soil were investigated. Apparent degradation of PAHs in the PS-treated soil was observed, with total PAHs removal efficiencies ranging from 38.28% to 79.97%; the removal efficiency increased with increasing PS consumption. However, bacterial abundance was negatively affected by oxidation in all PS-amended treatments, decreasing by 0.89 to 2.88 orders of magnitude compared to the untreated soil, and the number of total bacteria decreased as PS consumption increased. Different PS activation methods and dosages exhibited different influences on the bacterial community composition. Bacteria capable of degrading PAHs under anoxic conditions were composed predominantly of Proteobacteria and Firmicutes, whose total abundance also decreased with increasing PS consumption. The results of this study provide important insight into the design of remediation projects for PAHs-contaminated soil.
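Two of the quantities reported above are straightforward to compute: the percent removal efficiency relative to the initial concentration, and the drop in bacterial abundance expressed in orders of magnitude (a log10 ratio). A minimal sketch with hypothetical concentrations and gene copy numbers (not the study's measurements):

```python
import math

def removal_efficiency(c0, c):
    """Percent removal of total PAHs relative to the initial concentration."""
    return (c0 - c) / c0 * 100.0

def log_reduction(n0, n):
    """Drop in bacterial abundance, in orders of magnitude (log10 units)."""
    return math.log10(n0 / n)

# Hypothetical values for illustration: 250 -> 50 mg/kg total PAHs,
# 1e9 -> 1.3e6 gene copies per gram of soil
print(round(removal_efficiency(250.0, 50.0), 1))  # → 80.0 (% PAHs removed)
print(round(log_reduction(1.0e9, 1.3e6), 2))      # → 2.89 orders of magnitude
```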

Keywords: activation method, chemical oxidation, indigenous bacteria, polycyclic aromatic hydrocarbon

Procedia PDF Downloads 101
1592 Heating of the Ions by Electromagnetic Ion Cyclotron (EMIC) Waves Using Magnetospheric Multiscale (MMS) Satellite Observation

Authors: A. A. Abid

Abstract:

Magnetospheric Multiscale (MMS) satellite observations in the inner magnetosphere were used to detect the proton band of electromagnetic ion cyclotron (EMIC) waves on December 14, 2015; these waves contribute significantly to the dynamics of the magnetosphere. It was found that the intensity of the EMIC waves gradually increases with decreasing L shell. The waves are triggered by hot proton thermal anisotropy. Low-energy cold protons can be energized by EMIC waves when the wave intensity is high, making these previously invisible protons visible; the EMIC waves also energize helium ions. EMIC waves, whose frequency in the Earth's magnetosphere ranges from 0.001 Hz to 5 Hz, have drawn considerable attention for their ability to carry energy. Since these waves act as a loss mechanism for energetic electrons from the Van Allen radiation belt to the atmosphere, it is necessary to understand how and where they are produced, as well as the direction of the waves along the magnetic field lines. This work examines how the excitation of EMIC waves is affected by the hot proton temperature anisotropy, with a minimum resonance energy of 6.9 keV and an energy range of 7 to 26 keV; for hot protons below the minimum resonance energy, the reverse effect is observed. It is demonstrated that, throughout the energy range of 1 eV to 100 eV, the number density and temperature anisotropy of the cold protons likewise rise as the intensity of the EMIC waves increases. Key points: (1) analysis of EMIC waves produced by hot proton temperature anisotropy using MMS data; (2) the number density and temperature anisotropy of the cold protons increase owing to high-intensity EMIC waves; (3) the energization of cold protons in the 1-100 eV range by EMIC waves, observed with the Magnetospheric Multiscale (MMS) satellites, has not been discussed before.
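For context, the band structure of EMIC waves is set by the ion cyclotron frequency f_c = qB / (2πm); the proton band discussed above lies below the local proton cyclotron frequency. A quick check with an assumed inner-magnetosphere field strength of 200 nT (an illustrative value, not from the MMS event) places f_c inside the 0.001-5 Hz range quoted above:

```python
import math

Q = 1.602176634e-19    # elementary charge, C
M_P = 1.67262192e-27   # proton mass, kg

def cyclotron_frequency_hz(b_tesla, mass_kg, charge_c=Q):
    """Ion cyclotron frequency f_c = qB / (2*pi*m)."""
    return charge_c * b_tesla / (2 * math.pi * mass_kg)

B = 200e-9  # assumed local field strength of 200 nT
f_proton = cyclotron_frequency_hz(B, M_P)
f_helium = cyclotron_frequency_hz(B, 4 * M_P)  # singly ionized He+
print(f"{f_proton:.2f} Hz, {f_helium:.2f} Hz")  # proton ≈ 3.05 Hz, He+ ≈ 0.76 Hz
```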

Keywords: EMIC waves, temperature anisotropy of hot protons, energization of the cold proton, magnetospheric multiscale (MMS) satellite observations

Procedia PDF Downloads 89
1591 Design, Synthesis and Evaluation of 4-(Phenylsulfonamido)Benzamide Derivatives as Selective Butyrylcholinesterase Inhibitors

Authors: Sushil Kumar Singh, Ashok Kumar, Ankit Ganeshpurkar, Ravi Singh, Devendra Kumar

Abstract:

In the spectrum of neurodegenerative diseases, Alzheimer's disease (AD) is characterized by the presence of amyloid β plaques and neurofibrillary tangles in the brain. It results in cognitive and memory impairment, to which the loss of cholinergic neurons is considered a contributing factor. Donepezil, an acetylcholinesterase (AChE) inhibitor that also inhibits butyrylcholinesterase (BuChE) and improves memory and the brain's cognitive functions, is the most successful and most prescribed drug for treating the symptoms of AD. The present work is based on designing selective BuChE inhibitors using computational techniques. Machine learning models were trained using classification algorithms, followed by screening of a diverse chemical library of compounds, and various molecular modelling and simulation techniques were used to obtain the virtual hits. The amide derivatives of 4-(phenylsulfonamido)benzoic acid were synthesized and characterized using 1H and 13C NMR, FTIR and mass spectrometry. Enzyme inhibition assays were performed on equine plasma BuChE and electric eel AChE using the method developed by Ellman et al. Compounds 31, 34, 37, 42, 49, 52 and 54 were found to be active against equine BuChE. N-(2-chlorophenyl)-4-(phenylsulfonamido)benzamide and N-(2-bromophenyl)-4-(phenylsulfonamido)benzamide (compounds 34 and 37) displayed IC50 values of 61.32 ± 7.21 and 42.64 ± 2.17 nM against equine plasma BuChE, respectively. Ortho-substituted derivatives were more active against BuChE; among all compounds, the ortho-halogen and ortho-alkyl substituted derivatives were the most active, with minimal AChE inhibition. The compounds were selective toward BuChE.
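IC50 values such as those above are usually obtained by fitting a dose-response curve to Ellman-assay inhibition data. As a simplified stand-in for full nonlinear fitting (the study's actual fitting procedure is not stated), the sketch below interpolates the 50% inhibition point on a log-concentration scale; the inhibition data are hypothetical, not taken from the study:

```python
import math

# Hypothetical % inhibition of BuChE vs. inhibitor concentration (nM);
# values chosen for illustration only, not measured in the study.
conc_nm = [10, 20, 40, 80, 160]
inhibition = [18.0, 35.0, 48.0, 65.0, 80.0]

def ic50_log_interp(conc, inh):
    """Estimate IC50 by linear interpolation on a log-concentration scale."""
    for j in range(len(conc) - 1):
        c1, c2, i1, i2 = conc[j], conc[j + 1], inh[j], inh[j + 1]
        if i1 <= 50.0 <= i2:  # 50% crossing bracketed by this pair
            frac = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

print(f"IC50 ≈ {ic50_log_interp(conc_nm, inhibition):.1f} nM")  # → ≈ 43.4 nM
```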

Keywords: Alzheimer disease, butyrylcholinesterase, machine learning, sulfonamides

Procedia PDF Downloads 119
1590 Polymer Nanostructures Based Catalytic Materials for Energy and Environmental Applications

Authors: S. Ghosh, L. Ramos, A. N. Kouamé, A.-L. Teillout, H. Remita

Abstract:

Catalytic materials have attracted continuous attention due to their promising uses in a variety of energy and environmental applications, including clean energy, energy conversion and storage, purification and separation, degradation of pollutants, and electrochemical reactions. With advanced synthetic techniques, polymer nanostructures and nanocomposites can be synthesized directly through a soft-template-mediated approach using swollen hexagonal mesophases, which allows the size, morphology, and structure of the polymer nanostructures to be modulated. As an alternative to conventional catalytic materials, one-dimensional PDPB polymer nanostructures show high photocatalytic activity under visible light for the degradation of pollutants, and these photocatalysts are very stable with cycling. Transmission electron microscopy (TEM) and AFM-IR characterization reveal that the morphology and structure of the polymer nanostructures do not change after photocatalysis. These stable and cheap polymer nanofibers and metal-polymer nanocomposites are easy to process and can be reused without appreciable loss of activity. The nanocomposites were formed via a one-pot chemical redox reaction, placing 3.4 nm Pd nanoparticles on poly(diphenylbutadiyne) (PDPB) nanofibers (30 nm); the reduction of Pd(II) ions is accompanied by oxidative polymerization, leading to the composite materials. The hybrid Pd/PDPB nanocomposites were used as electrode materials for the electrocatalytic oxidation of ethanol without the support of a proton-exchange Nafion membrane. Hence, these conducting polymer nanofibers and nanocomposites offer the prospect of developing a new generation of efficient photocatalysts for environmental protection and of electrocatalysts for fuel cell applications.

Keywords: conducting polymer, swollen hexagonal mesophases, solar photocatalysis, electrocatalysis, water depollution

Procedia PDF Downloads 361
1589 Performance of the SOFA and APACHE II Scoring Systems in Predicting the Mortality of ICU Cases

Authors: Yu-Chuan Huang

Abstract:

Introduction: There is a higher mortality rate for unplanned transfers to intensive care units. Such transfers also require a longer length of stay, so that intensive care unit beds cannot be used effectively. This affects the immediate medical treatment of critically ill patients, resulting in a drop in the quality of medical care. Purpose: The purpose of this study was to use the SOFA and APACHE II scores to analyze the mortality rate of cases transferred from the ED to the ICU, so that appropriate care can be provided as early as possible according to the score. Methods: This study used a descriptive experimental design. The sample size was estimated at 220 to reach a power of 0.8 for detecting a medium effect size of 0.30 at a 0.05 significance level, using G*Power; allowing for an estimated follow-up loss, the required sample size was 242 participants. SOFA and APACHE II scores were calculated from the medical records of cases transferred from the ED to the ICU in 2016. Results: 233 participants met the study criteria, of whom 33 died according to the medical records. Age and sex versus qSOFA and SOFA, and sex versus APACHE II, showed p>0.05; age versus APACHE II in the ED and ICU showed r=0.150 and 0.268 (p < 0.001**). The scores versus mortality risk showed: ED qSOFA, r=0.235 (p < 0.001**), exp(B)=1.685 (p=0.007); ICU SOFA, 0.78 (p < 0.001**), exp(B)=1.205 (p < 0.001); APACHE II in the ED and ICU, r=0.253 and 0.286 (p < 0.001**), exp(B)=1.041 and 1.073 (p=0.017 and 0.001). For SOFA, a cutoff score above 15 points was identified as a predictor of a 95% mortality risk. Conclusions: The SOFA and APACHE II scores were calculated from initial laboratory data in the Emergency Department and during the first 24 hours of ICU admission. In conclusion, the SOFA and APACHE II scores are significantly associated with mortality and strongly predict it. With these early predictors of morbidity and mortality, patients can be given a detailed assessment and proper care, thereby reducing mortality and length of stay.
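The exp(B) values reported above are odds ratios from logistic regression: each additional score point multiplies the mortality odds by exp(B). A small sketch shows how the reported ICU SOFA odds ratio of 1.205 per point compounds over a score difference:

```python
def odds_ratio_for_delta(or_per_point, delta):
    """Multiplicative change in mortality odds for a score difference `delta`.

    Logistic regression gives log-odds linear in the score, so the odds
    ratio for a delta-point difference is exp(B)**delta.
    """
    return or_per_point ** delta

# exp(B) = 1.205 per ICU SOFA point, as reported in the abstract
print(round(odds_ratio_for_delta(1.205, 5), 2))   # → 2.54: odds ~2.5x for +5 points
print(round(odds_ratio_for_delta(1.205, 10), 2))  # → 6.45: odds ~6.5x for +10 points
```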

Keywords: SOFA, APACHE II, mortality, ICU

Procedia PDF Downloads 130
1588 Effect of Species and Slaughtering Age on Quality Characteristics of Different Meat Cuts of Humped Cattle and Water Buffalo Bulls

Authors: Muhammad Kashif Yar, Muhammad Hayat Jaspal, Muawuz Ijaz, Zafar Hayat, Iftikhar Hussain Badar, Jamal Nasir

Abstract:

Meat quality characteristics such as ultimate pH (pHu), color, cooking loss and shear force were evaluated for eight wholesale meat cuts of humped cattle (Bos indicus) and water buffalo (Bubalus bubalis) bulls at two age groups. A total of 48 animals were slaughtered: 24 of each species, and within each species 12 from each of the 18- and 26-month age groups. At 24 h post-slaughter, eight meat cuts, i.e., tenderloin, sirloin, rump, cube roll, round, topside, silverside and blade, were taken from the carcass. The pHu of the tenderloin (5.65 vs. 5.55), sirloin (5.67 vs. 5.60), cube roll (5.68 vs. 5.62) and blade (5.88 vs. 5.72) was significantly higher (P<0.05) in buffalo than in cattle. The tenderloin showed a significantly higher (44.63 vs. 42.23) and the sirloin a lower (P<0.05) mean L* value (42.28 vs. 44.47) in cattle than in buffalo, whilst only the tenderloin's mean L* value was affected by animal age. Species had a significant (P<0.05) effect on the mean a*, b*, C, and h values of all meat cuts. The shear force of the majority of meat cuts varied considerably within species and age groups. The mean shear values of the tenderloin, sirloin, cube roll and blade were higher (P<0.05) in buffalo than in cattle, and the shear values of the rump, round, topside and silverside increased significantly (P<0.05) with animal age. In conclusion, primal cuts of cattle showed better meat quality, especially tenderness, than those of buffalo. Furthermore, calves should be raised to at least 26 months of age to maximize profitability by providing better quality meat.

Keywords: buffalo, cattle, meat color, meat quality, slaughtering age, tenderness

Procedia PDF Downloads 120
1587 Integration of GIS with Remote Sensing and GPS for Disaster Mitigation

Authors: Sikander Nawaz Khan

Abstract:

Natural disasters such as floods, earthquakes, cyclones and volcanic eruptions cause immense losses of property and life every year. The current status and actual losses from natural hazards can be determined, and the next probable disasters predicted, using different remote sensing and mapping technologies. The Global Positioning System (GPS) pinpoints the exact position of damage and can also communicate with wireless sensor nodes embedded in potentially dangerous places. GPS provides emergency responders with precise, accurate locations and related information such as the speed, track, direction and distance of a target object. Remote sensing makes it possible to map damage without physical contact with the target area. With the addition of more remote sensing satellites and other advancements, early warning systems now operate very efficiently, and remote sensing is used at both local and global scales. High Resolution Satellite Imagery (HRSI), airborne remote sensing and space-borne remote sensing play a vital role in disaster management. Early on, Geographic Information Systems (GIS) were used to collect, arrange and map spatial information, but GIS now also has the capability to analyze spatial data; this analytical ability is the main reason for its adoption by emergency service providers such as police and ambulance services. The full potential of these so-called 3S technologies cannot be realized by any one of them alone: integrating GPS and other remote sensing techniques with GIS has opened new horizons in the modeling of earth science activities. Several cases, including the 2004 Indian Ocean tsunami, the Mount Mangart landslides and the 2005 Pakistan-India earthquake, are described in this paper.
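The distance information that GPS supplies to responders reduces to great-circle geometry between two fixes. A minimal haversine sketch, assuming a spherical Earth; the coordinates below are illustrative, not locations from the paper:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative coordinates (assumed): responder position vs. a damage site
print(round(haversine_km(33.72, 73.04, 34.37, 73.47), 1))
```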

Keywords: disaster mitigation, GIS, GPS, remote sensing

Procedia PDF Downloads 445
1586 Trading off Accuracy for Speed in PowerDrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration in logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines and so they should be easily applicable to other systems. For the first optimization we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result-values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings down the 95th latency percentile from 30 to 4 seconds.
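The paper's exact accuracy heuristic is not reproduced here, but the idea of annotating sampled results can be sketched as follows: scale a sampled group count up by the sampling rate, and flag it as accurate only when its estimated relative standard error is below a threshold (the 5% threshold and the Bernoulli-sampling error model are assumptions for this sketch):

```python
import math

def estimate_count(sampled_hits, sample_rate, max_rel_err=0.05):
    """Scale up a sampled group count and annotate it as accurate or not.

    For k hits observed under Bernoulli sampling at rate r, the relative
    standard error of the scaled-up count is roughly sqrt((1 - r) / k).
    """
    estimate = sampled_hits / sample_rate
    rel_err = math.sqrt((1 - sample_rate) / sampled_hits) if sampled_hits else float("inf")
    return estimate, rel_err <= max_rel_err

print(estimate_count(10_000, 0.01))  # → (1000000.0, True): ~1% error, accurate
print(estimate_count(50, 0.01))      # → (5000.0, False): ~14% error, flagged
```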

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 240
1585 Physicochemical Properties and Thermal Inactivation of Polyphenol Oxidase of African Bush Mango (Irvingia gabonensis) Fruit

Authors: Catherine Joke Adeseko

Abstract:

Enzymatic browning is an economically important disorder that degrades organoleptic properties and deters consumers from purchasing fresh fruit and vegetables. Prevention and control of enzymatic browning in fruit and fruit products are therefore imperative. This study investigated the catalytic role of polyphenol oxidase (PPO) in the adverse browning of African bush mango (Irvingia gabonensis) fruit peel and pulp. PPO was isolated and purified, and its physicochemical properties were evaluated, including the effect of pH with SDS, temperature, and thermodynamic parameters, which led to thermal inactivation of the purified PPO at 80 °C. The pH and temperature optima of PPO were found to be 7.0 and 50 °C, respectively. PPO activity increased gradually with pH, with the highest activity at neutral pH 7.0, while enzymatic inhibition was observed in the acidic region at pH 2.0. The presence of SDS at pH 5.0 and below was found to inhibit the activity of PPO from both the peel and pulp of I. gabonensis. The average values of enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG) obtained at 20 min of incubation over 30-80 °C were 39.93 kJ.mol-1, 431.57 J.mol-1.K-1 and -107.99 kJ.mol-1 for peel PPO, and 37.92 kJ.mol-1, -442.51 J.mol-1.K-1 and -107.22 kJ.mol-1 for pulp PPO, respectively. Thermal inactivation of I. gabonensis PPO, assayed with catechol, showed a reduction in catalytic activity as the temperature and duration of heat treatment increased, reflected by an increase in the rate constant k. The half-life (t1/2) of PPO decreased as the incubation temperature increased, owing to the instability of the enzyme at high temperatures, and was higher in pulp than in peel. Both the D and z values decreased with increasing temperature. These findings suggest processing parameters for controlling PPO in the potential industrial application of I. gabonensis fruit, in order to prolong its shelf-life for maximum utilization.
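The quantities above follow from first-order inactivation kinetics: t1/2 = ln 2 / k, the D-value (time for 90% activity loss) is ln 10 / k, and the z-value is the temperature rise producing a tenfold drop in D. A sketch with hypothetical rate constants, not the study's measured values:

```python
import math

def half_life(k):
    """First-order half-life t1/2 = ln 2 / k."""
    return math.log(2) / k

def d_value(k):
    """Decimal reduction time D = ln 10 / k (time for 90% activity loss)."""
    return math.log(10) / k

def z_value(t1, d1, t2, d2):
    """Temperature rise giving a tenfold drop in the D-value."""
    return (t2 - t1) / (math.log10(d1) - math.log10(d2))

# Hypothetical first-order inactivation constants (min^-1) at 60 and 80 °C,
# chosen for illustration only
k_60, k_80 = 0.02, 0.12
print(round(half_life(k_60), 1), round(half_life(k_80), 1))  # → 34.7 5.8 (min)
print(round(z_value(60, d_value(k_60), 80, d_value(k_80)), 1))  # → 25.7 (°C)
```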

Keywords: enzymatic, browning, characterization, activity

Procedia PDF Downloads 62
1584 The Study of Climate Change Effects on the Performance of Thermal Power Plants in Iran

Authors: Masoud Soltani Hosseini, Fereshteh Rahmani, Mohammad Tajik Mansouri, Ali Zolghadr

Abstract:

Climate change is accompanied by rising ambient temperatures and limited water availability. The main objective of this paper is to investigate the effects of climate change on thermal power plants in Iran, including gas turbines, steam plants and combined cycle power plants. For this purpose, the increase in ambient temperature and the limits on water availability are analyzed, and their effects on the power output and efficiency of thermal power plants are determined. According to the results, ambient temperature has the greatest effect on steam power plants with indirect (Heller) cooling systems, whose efficiency decreases by 0.55 percent per 1 °C of ambient temperature increase; the corresponding figures are 0.52 and 0.2 percent for once-through and wet cooling systems, respectively. The decrease in power output ranges from 0.2% to 0.65% per 1 °C of air temperature increase for steam power plants with wet cooling systems and for gas turbines. Based on the distribution of thermal power plants in Iran and different climate change scenarios, the total decrease in power output due to ambient temperature increase falls between 413 and 1661 MW. Another limitation incurred by climate change is water availability: in the optimistic scenario, the output of steam plants in dry, hot climate areas decreases by 1450 MW over the coming decades, while the remaining scenarios indicate a decrease of 4152 MW in highland and cold climate areas. It is therefore necessary to consider appropriate solutions to overcome these limitations. Considering all climate change effects together, the reduction in power output falls between 2465 and 7294 MW, and the efficiency loss ranges from 0.12% to 0.56%, across the different scenarios.
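The per-degree sensitivities above can be turned into a simple linear derating estimate. The sketch below applies the reported percentage losses per cooling type to a hypothetical 500 MW unit under an assumed +3 °C warming; the plant size and warming scenario are illustrative, not from the study:

```python
# Per-degree sensitivities reported in the abstract (percent per 1 °C);
# note the Heller and once-through figures are efficiency losses, used here
# as a rough proxy for output derating in this sketch.
loss_pct_per_degc = {
    "steam (Heller, indirect cooling)": 0.55,
    "steam (once-through cooling)": 0.52,
    "steam (wet cooling)": 0.20,
}

def output_after_warming(rated_mw, pct_per_degc, delta_t):
    """Linear estimate of output after a delta_t ambient temperature rise."""
    return rated_mw * (1 - pct_per_degc / 100 * delta_t)

# Illustrative 500 MW unit under a +3 °C warming scenario (assumed values)
for cooling, pct in loss_pct_per_degc.items():
    print(f"{cooling}: {output_after_warming(500, pct, 3):.1f} MW")
```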

Keywords: climate, change, thermal, power plants

Procedia PDF Downloads 56
1583 Electrophoretic Deposition of p-Type Bi2Te3 for Thermoelectric Applications

Authors: Tahereh Talebi, Reza Ghomashchi, Pejman Talemi, Sima Aminorroaya

Abstract:

Electrophoretic deposition (EPD) of p-type Bi2Te3 has been accomplished, and a high-quality, crack-free thick film has been achieved for thermoelectric (TE) applications. TE generators (TEGs) can convert waste heat into electricity, which could help mitigate global warming; however, TEGs are expensive due to the high cost of the materials and the complex, costly manufacturing process. EPD is a simple and cost-effective method that has recently been used for advanced applications. In EPD, when a DC electric field is applied to charged powder particles suspended in a liquid, they migrate toward and deposit on the oppositely charged substrate. This study shows that it is possible to prepare a TE film by EPD and potentially achieve high TE properties at low cost. The relationship between the deposited weight and EPD process parameters such as applied voltage and time has been investigated, and a linear dependence observed, in good agreement with the theoretical principles of EPD. A stable EPD suspension of p-type Bi2Te3 was prepared in an acetone-ethanol mixture with triethanolamine as a stabilizer, and the voltage and time of the EPD process were optimized to achieve a high-quality homogeneous film on a copper substrate. The morphology and microstructure of the green deposited films were investigated using a scanning electron microscope (SEM), and the green Bi2Te3 films showed good adhesion to the substrate. In summary, this study has shown not only that EPD of p-type Bi2Te3 is possible, but that the resulting thick films are of sufficient quality for TE applications.
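The linear dependence of deposit weight on voltage and time noted above is what Hamaker's law for EPD predicts: deposited mass m = f·μ·c·E·S·t. A sketch with illustrative parameter values (assumed for this example, not the properties of the Bi2Te3 suspension used in the study):

```python
def hamaker_deposit_mass(f, mu, conc, e_field, area, time_s):
    """Hamaker's law for EPD: deposited mass m = f * mu * c * E * S * t.

    f: sticking efficiency (0-1), mu: electrophoretic mobility (m^2/V/s),
    conc: particle concentration (kg/m^3), e_field: field strength (V/m),
    area: electrode area (m^2), time_s: deposition time (s).
    All parameter values below are illustrative, not from the study.
    """
    return f * mu * conc * e_field * area * time_s

m1 = hamaker_deposit_mass(1.0, 2e-8, 10.0, 1000.0, 1e-4, 60)
m2 = hamaker_deposit_mass(1.0, 2e-8, 10.0, 1000.0, 1e-4, 120)
print(m2 / m1)  # → 2.0: deposit weight scales linearly with time, as observed
```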

Keywords: electrical conductivity, electrophoretic deposition, mechanical property, p-type Bi2Te3, Seebeck coefficient, thermoelectric materials, thick films

Procedia PDF Downloads 142