Search results for: optical power generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10262

1592 Chinese Leaders Abroad: Case in the Netherlands

Authors: Li Lin, Hein Roelfsema

Abstract:

To achieve aggressive expansion goals, many Chinese companies are seeking resources and markets around the world. To an increasing extent, Chinese enterprises have recognized the Netherlands as their gateway to the European market. Yet, large cultural gaps (e.g., individualism/collectivism, power distance) may affect expat leaders' influence processes and, in turn, intercultural teamwork. Lessons and suggestions from Chinese expat leaders can provide profound knowledge for managerial practice and future research. The current research focuses on the cultural differences between China and the Netherlands, along with leadership tactics for coping with and handling the differences that occur in international business work. Forty-seven exclusive in-depth interviews with Chinese expat leaders were conducted. Within each interview, respondents were asked what the main issues were when working with Dutch employees, and what they believed to be the keys to successful leadership in Dutch-Chinese cross-cultural workplaces. Consistent with previous research, the findings highlight the need to consider the cultural context within which leadership adapts. In addition, the findings indicate the importance of recognizing and applying the cultural advantages from which leadership originates. The results identify observation ability as a crucial key for Chinese managers leading Dutch or international teams. Moreover, setting a common goal helps a leader to overcome the challenges arising from cultural differences. Based on the analysis, we develop a process model to illustrate the dynamic mechanisms. Our study contributes to a better understanding of the transfer of management practices and has important practical implications for managing Dutch employees.

Keywords: Chinese managers, Dutch employees, leadership, interviews

Procedia PDF Downloads 342
1591 A Fuzzy Hybrid Decision Support System for Naval Base Place Selection in a Foreign Country

Authors: Latif Yanar, Muharrem Kaçan

Abstract:

In this study, an Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) Decision Support System (DSS) model for determining a navy base location in another country is proposed, together with decision support software (DESTEC 1.0) developed in the C# programming language. The proposed software can also run the fuzzy versions of the proposed DSS (Fuzzy AHP and Fuzzy ANP) to cope with the ambiguous and linguistic nature of the model. The AHP and ANP model for selecting the best place among the alternatives, including the criteria and alternatives, is developed and solved by experts from the Turkish Navy and by Turkish academicians from the international relations departments of universities in Turkey. The questionnaires used for weighting the criteria and the alternatives were also filled in by these experts. Some of the criteria are: the economic and political stability of the third country, the influence of another superpower in that country, historical relations, security in that country, social facilities in the city in which the base will be built, and the security and difficulty of transportation from a major city with an airport to the city that will host the base. Over 20 such criteria are determined, categorized into social, political, economic and military aspects. As a result, all the criteria and three alternatives are evaluated by different people with the background and experience to weight the criteria and alternatives, as required by the AHP and ANP evaluation system. Each alternative receives a score between 0 and 1, with the scores summing to 1. In the end, the DSS recommends one of the alternatives to the decision maker as the best one, according to the developed model and the evaluations of the experts.
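As a rough illustration of the AHP weighting step described above, the snippet below derives priority weights as the principal eigenvector of a pairwise comparison matrix and reports Saaty's consistency index. The 3×3 matrix is a hypothetical stand-in for the experts' questionnaire judgments, not data from this study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of a
    reciprocal pairwise-comparison matrix (standard AHP)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # weights sum to 1
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)        # Saaty consistency index
    return w, ci

# Hypothetical judgments for three criteria, e.g.
# political stability vs. security vs. social facilities
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, ci = ahp_weights(A)
```

By convention, a consistency index well below 0.1 (relative to the random index for the matrix size) indicates acceptably consistent judgments.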

Keywords: analytic hierarchical process, analytic network process, fuzzy logic, naval base place selection, multiple criteria decision making

Procedia PDF Downloads 390
1590 The Determinants of Corporate Hedging Strategy

Authors: Ademola Ajibade

Abstract:

Previous studies have explored several rationales for hedging strategies, but the evidence they provide remains ambiguous. Using a hand-collected dataset of 2,460 observations of non-financial firms in eight African countries covering 2013-2022, this paper investigates the determinants and extent of corporate hedge use. In particular, it focuses on the link between country-specific conditions and the corporate hedging behaviour of firms. To our knowledge, this is the first African study to investigate the association between country-specific factors and corporate hedging policy. The evidence from both univariate and multivariate analyses reveals that country-level corruption and government quality are important indicators of the decision and extent of hedge use among African firms. The connection between country-specific factors and corporate hedge use is stronger for firms located in highly corrupt countries, suggesting that firms in corrupt countries are more motivated to hedge because of the larger exposure they face. In addition, we test the risk management theories and observe that CEOs' educational qualifications and experience shape corporate hedging behaviour. We use lagged variables in a panel data setting to address endogeneity concerns, and an interaction term between governance indices and firm-specific variables to test for robustness. Overall, our findings reveal that institutional factors shape risk management decisions and have predictive power in explaining corporate hedging strategy.
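The lagged-variable design mentioned above can be sketched with simulated data (illustrative numbers only, not the paper's dataset): a hypothetical governance index drives next-period hedging, and an OLS regression of hedging on the one-period lag of governance recovers the effect while avoiding simultaneity with the current period:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firm-year panel: 50 firms, 10 years
n_firms, n_years = 50, 10
gov = rng.normal(size=(n_firms, n_years))        # governance index
noise = rng.normal(scale=0.3, size=(n_firms, n_years))
hedge = np.empty((n_firms, n_years))
hedge[:, 1:] = 0.5 * gov[:, :-1] + noise[:, 1:]  # hedging responds to last year's governance
hedge[:, 0] = noise[:, 0]

# Stack into vectors, dropping year 0 (its lag is undefined)
y = hedge[:, 1:].ravel()
x = gov[:, :-1].ravel()                          # one-period lag of governance

# OLS with intercept; beta[1] estimates the (true) 0.5 effect
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```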

Keywords: corporate hedging, governance quality, corruption, derivatives

Procedia PDF Downloads 89
1589 Enzyme Immobilization: A Strategy to Overcome Enzyme Limitations and Expand Their Applications

Authors: Charline Monnier, Rudolf Andrys, Irene Castellino, Lucie Zemanova

Abstract:

Due to their inherent sustainability and compatibility with green chemistry principles, enzymes are attracting increasing attention for applications such as bioremediation and biocatalysis. These natural catalysts boast remarkable substrate specificity and operate under mild biological conditions. However, their intrinsic limitations, such as instability at high temperatures or in organic solvents, impede their wider applicability. Enzyme immobilization on supportive matrices has emerged as a promising strategy to address these challenges. This approach not only facilitates enzyme reusability but also offers the potential to modulate enzyme stability, activity, and selectivity. The present study investigates the immobilization and application of two distinct groups of hydrolases: PETases, naturally capable of degrading polyethylene terephthalate (PET), and cholinesterases (ChEs), key enzymes in neurotransmitter regulation. All tested enzymes will be immobilized on porous and non-porous particles using both covalent and non-covalent methods. Additionally, the stability of the PETases and cholinesterases will be explored, followed by exposure to denaturing conditions to assess their resilience under harsh conditions. Furthermore, owing to their exceptional catalytic efficiency and selectivity, their biocatalytic performance will be tested on xenobiotic substrates, with the aim of establishing them as replacements for conventional chemical catalysts in environmentally friendly processes. By exploiting the power of enzyme immobilization, this research strives to unlock the full potential of these biocatalysts for sustainable and efficient technological advancements.

Keywords: biocatalysis, bioremediation, enzyme efficiency, enzyme immobilization, green chemistry

Procedia PDF Downloads 54
1588 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes use empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., the design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) had been conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
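The core operation behind a floor response spectrum, i.e. the peak response of a damped single-degree-of-freedom oscillator swept over a range of periods, can be sketched as below. The time-stepping scheme and the input floor motion are simplified placeholders, not the paper's actual procedure or data:

```python
import numpy as np

def response_spectrum(accel, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum: peak relative displacement
    of damped SDOF oscillators driven by a base/floor acceleration
    history, integrated by a simple central-difference scheme."""
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        u_prev, u, peak = 0.0, 0.0, 0.0
        for ag in accel:
            # u'' + 2*zeta*wn*u' + wn^2*u = -ag  (relative motion)
            v = (u - u_prev) / dt
            a = -ag - 2.0 * zeta * wn * v - wn**2 * u
            u_next = 2.0 * u - u_prev + a * dt**2
            u_prev, u = u, u_next
            peak = max(peak, abs(u))
        Sa.append(wn**2 * peak)          # pseudo-acceleration = wn^2 * Sd
    return np.array(Sa)

# Hypothetical floor motion: a decaying 1 Hz sine sampled at 100 Hz
dt = 0.01
t = np.arange(0.0, 10.0, dt)
accel = np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
periods = np.array([0.2, 0.5, 1.0, 2.0])
Sa = response_spectrum(accel, dt, periods, zeta=0.05)
```

As expected, the spectrum peaks at the oscillator period tuned to the 1 Hz excitation, which is the tuning effect the abstract describes.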

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 235
1587 The Visual Side of Islamophobia: A Social-Semiotic Analysis

Authors: Carmen Aguilera-Carnerero

Abstract:

Islamophobia, the unfounded hostility towards Muslims and Islam, has been studied extensively in recent decades from perspectives ranging from anthropology and sociology to media studies and linguistics. In the past few years, we have witnessed how the birth of social media has transformed formerly passive audiences into active groups that not only receive and digest information but also create and comment publicly on any event of interest. In this way, average citizens are now empowered to become potential opinion leaders. This rise of social media has given way to a different form of Islamophobia, the so-called 'cyberIslamophobia'. Considerably less attention, however, has been given to the Islamophobic images that accompany texts in social media. This paper analyses a corpus of 300 images of an Islamophobic nature taken from social media (Twitter and Facebook) between 2014 and 2017 to examine: a) how hate speech is visually constructed; b) how cyberIslamophobia is articulated through images, and whether there are differences or similarities between the textual and the visual elements; c) the impact of those images on the audience and its reaction to them; and d) whether visual cyberIslamophobia has permeated popular culture (for example, through memes) and what its real impact is. To carry out this task, we use Critical Discourse Analysis as the most suitable theoretical framework for analysing and critiquing the dominant discourses that sustain inequality, injustice, and oppression. The images are studied within the frameworks of visual framing theory and visual design grammar, leading to the conclusion that memes are subtle but very powerful tools for spreading Islamophobia and fostering hate speech under the guise of humour within popular culture.

Keywords: cyberIslamophobia, visual grammar, social media, popular culture

Procedia PDF Downloads 166
1586 Visco - Plastic Transition and Transfer of Plastic Material with SGF in case of Linear Dry Friction Contact on Steel Surfaces

Authors: Lucian Capitanu, Virgil Florescu

Abstract:

Modeling specific tribological processes for laboratory studies often raises special problems. One such problem is reproducing the extremely high contact temperatures and pressures at which the injection or extrusion processing of thermoplastic materials takes place. Tribological problems occur mainly with thermoplastic materials reinforced with glass fibers, which cause severe wear of the barrels and screws of processing machines in a short time. Obtaining temperatures of around 210 °C and higher, as well as pressures of around 100 MPa, is very difficult in the laboratory. This paper reports a simple and convenient solution for achieving these conditions, using sliding friction couples in linear contact: a cylindrical liner of glass-fiber-filled plastic against flat steel samples, polished and super-finished. C120 steel, a mould steel, and Rp3 steel, a high-speed tool steel, were used. The pressure was obtained by continuously loading the liner in rotational movement up to its elastic limit, at which point the dry friction coefficient reaches or exceeds the value of 0.5. Through dissipation of the power lost by friction on the flat steel sample, contact temperatures at the metal surface reach and exceed 230 °C, which places them in the temperature range of injection processing. Contact pressures ranging from 16.3 to 36.4 MPa (under the load and material conditions used) were obtained, depending on the plastic material used and its glass fiber content.

Keywords: plastics with glass fibers, dry friction, linear contact, contact temperature, contact pressure, experimental simulation

Procedia PDF Downloads 301
1585 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater

Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu

Abstract:

The increasing generation of saline wastewater by various industrial activities is becoming a global concern for activated sludge (AS)-based biological treatment, which is widely applied in wastewater treatment plants (WWTPs). In the AS process, an increase in wastewater salinity has a negative impact on overall performance. The advent of conventional (bacterial) aerobic granular sludge (AGS) biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could further enhance nutrient removal and potentially reduce aeration costs through symbiotic algal-bacterial activity, thereby also reducing the overall treatment cost. Nonetheless, salt stress may decrease biomass growth, microbial activity and nutrient removal. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS, and to the authors' best knowledge, the two AGS systems have not been compared with respect to nutrient removal capacity under increasing salinity. This study sought to determine the impact of salinity on the algal-bacterial AGS system in comparison with the bacterial AGS system, contributing to the application of AGS technology to real-world saline wastewater treatment. The salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L and 15 g/L of NaCl under 24-h artificial illuminance of approximately 97.2 µmol m⁻² s⁻¹; mature bacterial and algal-bacterial AGS were used to operate two identical sequencing batch reactors (SBRs) with a working volume of 0.9 L each. The results showed that the salinity increase caused no apparent change in the color of the bacterial AGS, while the color of the algal-bacterial AGS progressively changed from green to dark green.
With increasing salinity, granule diameter and fluffiness increased in the bacterial AGS reactor, whereas the algal-bacterial AGS diameter decreased. Nitrite accumulation rose from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively, to 9.8 mg/L in both systems as the NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl, where it averaged 4.2 mg/L and 2.4 mg/L in the bacterial and algal-bacterial AGS systems, respectively. Nutrient removal in the algal-bacterial system was higher than in the bacterial AGS system in terms of both nitrogen and phosphorus, although the removal rate was about 50% or lower. The results show that algal-bacterial AGS is more adaptable to salinity increases and could be more suitable for saline wastewater treatment. Optimization of the operating conditions of the algal-bacterial AGS system will be important to ensure stably high efficiency in practice.

Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, nutrients removal, saline wastewater, sequencing batch reactor

Procedia PDF Downloads 145
1584 Effects of Paternity: A Comparative Study to Analyze the Organization's Support in the Psychological Development of Children in India and USA

Authors: Aayushi Dalal

Abstract:

It is the mother who bears the child in her womb for nine months, and it is deeply rooted in Indian culture that the responsibility of caring for children falls solely on women; as a result, gender roles are stereotyped. Instead of a 50-50 partnership in parenting, it is conventional for men to take the role of breadwinner while women nurture the children at home. Mothers are thus considered to be more psychologically connected to their children than fathers. But current society is observing a dilution of these parental roles, which can create a gap in understanding from an organization's perspective. This is the basis of the study. The entry of women into the job market has forever changed how society views the traditional roles of fathers and mothers; feminism and financial power have reformed the classic parenting model. This has given rise to a more open and flexible society, emphasizing the father's importance to the emotional well-being of the child, with fathers also seen as capable caretakers and disciplinarians. This study analyzes the comparative differences in the father's role in the psychological development of the child in India and the USA, taking into consideration the organizations' support towards fathers. A sample of 150 fathers (75 from India and 75 from the USA) was selected, and a structured survey was carried out with several open-ended as well as closed-ended questions probing the issue. Care was taken to minimize the effect of environmental factors on the subjects. The findings of this research provide a framework for fathers to understand the magnitude of their role in their child's upbringing. This would not only improve the 'father-child' relationship but also make organizations more sympathetic towards their employees.

Keywords: paternity, child development, psychology, gender role, organization policy

Procedia PDF Downloads 217
1583 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, networks are becoming increasingly large-scale because of the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. It is therefore particularly important to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed, and they are compared with counterparts on Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) to present our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 127
1582 Does Pakistan Stock Exchange Offer Diversification Benefits to Regional and International Investors: A Time-Frequency (Wavelets) Analysis

Authors: Syed Jawad Hussain Shahzad, Muhammad Zakaria, Mobeen Ur Rehman, Saniya Khaild

Abstract:

This study examines the co-movement among the Pakistani, Indian, S&P 500 and Nikkei 225 stock markets using weekly data from 1998 to 2013. The time-frequency relationships between the selected stock markets are analyzed using the continuous wavelet power spectrum, the cross-wavelet transform and (squared) cross-wavelet coherency. The empirical evidence suggests strong dependence between the Pakistani and Indian stock markets. The co-movement of the Pakistani index with the developed U.S. and Japanese markets varies over time and frequency, with the long-run relationship dominant. The results of the cross-wavelet and wavelet coherence analyses indicate moderate covariance and correlation between the stock indexes, and the markets are in phase (i.e., cyclical in nature) over varying durations. The Pakistani stock market lagged the Indian stock market throughout the entire period, at the 8-32 and then the 64-256 week scales. Similar findings are evident for the S&P 500 and Nikkei 225 indexes; however, this relationship occurs in the later part of the study period. All three wavelet indicators provide strong evidence of higher co-movement during the 2008-09 global financial crisis. The empirical analysis reveals strong evidence that portfolio diversification benefits vary across frequencies and time. This analysis is unique and has several practical implications for regional and international investors when assigning optimal weights to different assets in portfolio formation.

Keywords: co-movement, Pakistan stock exchange, S&P 500, Nikkei 225, wavelet analysis

Procedia PDF Downloads 356
1581 Salinity Reduction from Saharan Brackish Water by Fluoride Removal on Activated Natural Materials: A Comparative Study

Authors: Amina Ramadni, Safia Taleb, André Dératani

Abstract:

The present study aims, firstly, to characterize the physicochemical quality of the brackish groundwater of the Terminal Complex (TC) in the region of Eloued-souf and to investigate the presence of fluoride, and secondly, to compare the adsorption capacities of three materials (activated alumina AA, sodium clay SC and hydroxyapatite HAP) for the groundwater of the region of Eloued-souf. To do this, a sampling campaign over 16 wells and consumer taps was undertaken. The results show that the groundwater is characterized by very high fluoride content and excessive mineralization that require, in some cases, specific treatment before supply. The adsorption study revealed the fluoride removal efficiencies of the three adsorbents: maximum adsorption is achieved after 45 minutes, at 90%, 83.4% and 73.95%, with adsorbed fluoride contents of 0.22 mg/L, 0.318 mg/L and 0.52 mg/L for AA, HAP and SC, respectively. The acidity of the medium significantly affects fluoride removal. The results deduced from the adsorption isotherms also show that the retention follows the Langmuir model. The adsorption tests show that the physicochemical characteristics of the brackish water are changed after treatment. The adsorption mechanism is an exchange between OH⁻ ions and fluoride ions. All three materials prove to be effective adsorbents for fluoride removal and could be developed into a viable technology to help reduce the salinity of the hyper-fluorinated Saharan waters. Finally, a comparison of the results obtained with the different adsorbents allows us to conclude that defluoridation by AA is the process of choice for many waters of the region of Eloued-souf, as it was shown to be a very interesting and promising technique.
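The Langmuir fit mentioned above is commonly done on the linearized isotherm Ce/qe = 1/(qm·KL) + Ce/qm. The snippet below sketches that fit on synthetic equilibrium data (the concentrations are invented for illustration, not the study's measurements):

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm q = qm*KL*Ce / (1 + KL*Ce) via the
    linearized form Ce/qe = 1/(qm*KL) + Ce/qm (least squares)."""
    Ce, qe = np.asarray(Ce, float), np.asarray(qe, float)
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qm = 1.0 / slope                  # maximum adsorption capacity
    KL = slope / intercept            # Langmuir affinity constant
    return qm, KL

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
# generated from qm = 2.0 and KL = 1.5, so the fit should recover them
Ce = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 4.0])
qe = 2.0 * 1.5 * Ce / (1.0 + 1.5 * Ce)
qm, KL = fit_langmuir(Ce, qe)
```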

Keywords: fluoride removal, hydrochemical characterization of groundwater, natural materials, nanofiltration

Procedia PDF Downloads 214
1580 Determination of Antioxidant Activity in Raphanus raphanistrum L.

Authors: Esma Hande Alıcı, Gülnur Arabacı

Abstract:

Antioxidants are compounds or systems that can safely interact with free radicals and terminate the chain reaction before vital molecules are damaged. The antioxidative effectiveness of these compounds depends on their chemical characteristics and their physical location within a food (proximity to membrane phospholipids, emulsion interfaces, or the aqueous phase). Antioxidants (e.g., flavonoids, phenolic acids, tannins, vitamin C, vitamin E) have diverse biological properties, such as anti-inflammatory, anti-carcinogenic and anti-atherosclerotic effects; they reduce the incidence of coronary diseases and contribute to the maintenance of gut health by modulating the gut microbial balance. Plants are excellent sources of antioxidants, especially given their high content of phenolic compounds. Raphanus raphanistrum L., the wild radish, is a flowering plant in the family Brassicaceae. It grows in Asia and the Mediterranean region and has been introduced into most parts of the world. It spreads rapidly and is often found growing on roadsides or in other places where the ground has been disturbed. It is an edible plant; in Turkey, its fresh aerial parts are mostly consumed, after boiling, as a salad with olive oil and lemon juice. The leaves of the plant are also used as an anti-rheumatic in traditional medicine. In this study, we determined the antioxidant capacity of two solvent fractions (methanol and ethyl acetate) obtained from Raphanus raphanistrum L. leaves. The antioxidant capacity of the plant was characterized using three different methods: DPPH radical scavenging activity, CUPRAC (cupric ion reducing antioxidant capacity) activity and reducing power activity.
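The first of the three assays, DPPH radical scavenging, is usually reported as a percent inhibition computed from absorbance readings at 517 nm. The snippet below shows that standard formula; the absorbance values are hypothetical, not this study's measurements:

```python
def dpph_scavenging(a_control, a_sample):
    """DPPH radical scavenging activity (%) from absorbances:
    (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical 517 nm absorbances: DPPH control vs. two extract dilutions
a_control = 0.80
inhibition = [dpph_scavenging(a_control, a) for a in (0.40, 0.20)]
```

A lower sample absorbance means more DPPH radical was quenched, so the weaker dilution here scores 50% and the stronger one 75%.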

Keywords: antioxidant activity, antioxidant capacity, Raphanus raphanistrum L., wild radish

Procedia PDF Downloads 275
1579 Implementation of a Monostatic Microwave Imaging System using a UWB Vivaldi Antenna

Authors: Babatunde Olatujoye, Binbin Yang

Abstract:

Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. The technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an ultra-wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1-6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna scanning the targets were measured in the lab using a VNA. An automated two-dimensional (2-D) scanner was employed to move the transceiver and collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and the experimental backscattered signals. The BPA uses the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm, with a discrete step size of 2 cm, to reconstruct a focused image of the targets. The BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulated and the experimental data, the BPA accurately reconstructed a high-resolution 2-D image of the targets in terms of shape and location. The target resolution of the BPA was further improved by applying filtering in the frequency domain.
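The coherent summation over time-of-flight profiles described above can be sketched as a delay-and-sum back projection. The paper's BPA works on measured S-parameters over a 2-D aperture; this simplified 1-D-aperture sketch uses synthetic time-domain echoes and an invented point target purely to show the mechanics:

```python
import numpy as np

C = 3e8  # propagation speed in free space (m/s)

def backprojection(scan_xy, signals, t, pixels_xy):
    """Monostatic delay-and-sum back projection: each scan position's
    time-domain echo is sampled at every pixel's round-trip delay and
    summed coherently across scan positions."""
    image = np.zeros(len(pixels_xy))
    for pos, sig in zip(scan_xy, signals):
        d = np.linalg.norm(pixels_xy - pos, axis=1)   # antenna-to-pixel range
        tau = 2.0 * d / C                             # round-trip delay
        image += np.interp(tau, t, sig)               # sample echo at delay
    return np.abs(image)

# Synthetic check: one point target at (0.10, 0.30) m, five scan positions
t = np.linspace(0.0, 10e-9, 2000)
target = np.array([0.10, 0.30])
scan = np.array([[x, 0.0] for x in np.linspace(-0.2, 0.2, 5)])
signals = []
for p in scan:
    tau0 = 2.0 * np.linalg.norm(target - p) / C
    signals.append(np.exp(-((t - tau0) / 0.2e-9) ** 2))  # Gaussian echo pulse
pixels = np.array([[x, 0.30] for x in np.linspace(-0.2, 0.2, 41)])
image = backprojection(scan, signals, t, pixels)
```

The echoes only align in phase at the true target location, so the reconstructed image peaks at the pixel nearest x = 0.10 m.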

Keywords: back propagation, microwave imaging, monostatic, Vivaldi antenna, ultra-wideband

Procedia PDF Downloads 18
1578 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus

Authors: Violina Angelova, Stefan Krustev

Abstract:

The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. The phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods were evaluated according to the recovery of available phosphorus, ease of application and rapidity of performance, and the relationships between the methods were examined statistically. Good agreement between the results of the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods correlated significantly with each other. When the soils were grouped according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends, and common tendencies were also found among the stronger extraction methods. Other factors influencing the extraction strength of the different methods include the soil:solution ratio, as well as the duration and intensity of shaking of the samples. The mean extractable P in the certified samples was found to be in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support of the Bulgarian National Science Fund, projects DFNI Н04/9 and DFNI Н06/21, is greatly appreciated.

Keywords: available soil phosphorus, certified samples, determination, soil test extractants

Procedia PDF Downloads 150
1577 Toxicological Analysis of Some Plant Combinations Used for the Treatment of Hypertension by Lay People in Northern Kwazulu-Natal, South Africa

Authors: Mmbulaheni Ramulondi, Sandy Van Vuuren, Helene De Wet

Abstract:

The use of plant combinations to treat various medical conditions is not a new concept, and it is known that traditional people do not rely only on a single plant extract for efficacy but often combine various plant species for treatment. The knowledge of plant combinations is transferred from one generation to the next in the belief that combination therapy may enhance efficacy, reduce toxicity, decrease adverse effects, increase bioavailability and result in lower dosages. However, combination therapy may also be harmful when the interaction is antagonistic, since it may result in increased toxicity. Although a fair amount of research has been done on the toxicity of medicinal plants, very little has been done on the toxicity of medicinal plants in combination. The aim of the study was to assess the toxicity potential of 19 plant combinations which have been documented as treatments for hypertension in northern KwaZulu-Natal by lay people. The aqueous extracts were assessed using two assays: the brine shrimp assay (Artemia franciscana) and the Ames test (mutagenicity). Only one plant combination in the current study (Aloe marlothii with Hypoxis hemerocallidea) has previously been assessed for toxicity. With the brine shrimp assay, the plant combinations were tested at two concentrations (2 and 4 mg/ml), while for the mutagenicity tests they were tested at 5 mg/ml. The results showed that in the brine shrimp assay, six combinations were toxic at 4 mg/ml: Albertisia delagoensis with Senecio serratuloides (57%), Aloe marlothii with Catharanthus roseus (98%), Catharanthus roseus with Hypoxis hemerocallidea (66%), Catharanthus roseus with Musa acuminata (89%), Catharanthus roseus with Momordica balsamina (99%) and Aloe marlothii with Trichilia emetica and Hyphaene coriacea (50%). 
However, when the concentration was reduced to 2 mg/ml, only three combinations were toxic: Aloe marlothii with Catharanthus roseus (76%), Catharanthus roseus with Musa acuminata (66%) and Catharanthus roseus with Momordica balsamina (73%). For the mutagenicity assay, only the combinations of Catharanthus roseus with Hypoxis hemerocallidea and Catharanthus roseus with Momordica balsamina were mutagenic towards Salmonella typhimurium strains TA98 and TA100. Most of the toxic combinations involved C. roseus, which was also toxic when tested singly. It is worth noting that C. roseus was one of the plant species most frequently used to treat hypertension, both singly and in combination, and some individuals have been using it for the last 20 years. The brine shrimp mortality percentage showed a significant correlation between dosage and toxicity; toxicity was thus dosage-dependent. A combination worth noting is that between A. delagoensis and S. serratuloides: singly, these plants were non-toxic towards brine shrimp, yet their combination resulted in antagonism, with a mortality rate of 57% at a total concentration of 4 mg/ml. Low toxicity was mostly observed, giving some validity to combined use; however, the few combinations showing increased toxicity demonstrate the importance of analysing plant combinations.

Keywords: dosage, hypertension, plant combinations, toxicity

Procedia PDF Downloads 352
1576 Check Red Blood Cells Concentrations of a Blood Sample by Using Photoconductive Antenna

Authors: Ahmed Banda, Alaa Maghrabi, Aiman Fakieh

Abstract:

The terahertz (THz) range lies between 0.1 and 10 THz. THz radiation can be generated and detected through different techniques, one of the most familiar being the photoconductive antenna (PCA). Generating THz radiation at a PCA involves applying a femtosecond laser pump and a DC voltage difference. A photocurrent is generated at the PCA, whose value is affected by different parameters (e.g., dielectric properties, DC voltage difference and incident power of the laser pump). THz radiation is used for biomedical applications, and different biomedical fields need new technologies to meet patients’ needs (e.g., blood-related conditions). In this work, a novel method to check the red blood cell (RBC) concentration of a blood sample using a PCA is presented. RBCs constitute 44% of total blood volume. RBCs contain hemoglobin, which transfers oxygen from the lungs to the body’s organs and then returns to the lungs carrying carbon dioxide, which the body expels through exhalation. The configuration has been simulated and optimized using COMSOL Multiphysics. Variation in RBC concentration affects the sample’s dielectric properties (e.g., the relative permittivity of RBCs in the blood sample). Accordingly, the effects of four blood samples (with different concentrations of RBCs) on the photocurrent value have been tested. The photocurrent peak value and the RBC concentration are inversely related due to the change in the dielectric properties of the RBCs: the photocurrent peak value dropped from 162.99 nA to 108.66 nA when the RBC concentration rose from 0% to 100% of the blood sample. Optimization of this method could help to launch new products for diagnosing blood-related conditions (e.g., anemia and leukemia). The resultant electric field from the DC components cannot be used to count the RBCs of the blood sample.
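The reported endpoints (162.99 nA at 0% RBC, 108.66 nA at 100% RBC) suggest how a measured photocurrent could be mapped back to an RBC concentration. Assuming, purely for illustration, a linear relation between the two reported endpoints (the abstract only states an inverse trend), the mapping can be sketched as:

```python
# Sketch of estimating RBC concentration from peak photocurrent, using
# the two endpoint values reported in the abstract. The linearity of the
# relation between them is an assumption made here for illustration.
I_0, I_100 = 162.99, 108.66  # peak photocurrent (nA) at 0% and 100% RBC

def rbc_concentration(i_peak_na):
    """Estimate RBC concentration (%) from a peak photocurrent in nA."""
    return 100.0 * (I_0 - i_peak_na) / (I_0 - I_100)

# A sample whose photocurrent lies halfway between the endpoints maps,
# under the linear assumption, to a 50% RBC concentration.
mid = rbc_concentration((I_0 + I_100) / 2)
```

A calibration against samples of known concentration would be needed before such a mapping could be used diagnostically.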

Keywords: biomedical applications, photoconductive antenna, photocurrent, red blood cells, THz radiation

Procedia PDF Downloads 200
1575 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (polysaccharides contribute up to 75% of the composition) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to its recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is generally applied before enzymatic treatment of lignocellulose to remove the recalcitrant components in the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy that combines two or more pre-treatments to attain maximum sugar yield. All the pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored. 
Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis using a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and showed lower formation of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatment methods rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 364
1574 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted with the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another, and effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the mission’s concept maturity level. This speed-up is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders. 
Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to changes. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
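The utility mechanism described above can be illustrated with a toy sketch. Here group social welfare is taken, as a simplifying assumption on our part, to be the sum of normalized stakeholder utilities; the design names, stakeholder names and scores are all hypothetical:

```python
# Toy illustration of multi-attribute utility aggregation: each
# stakeholder assigns each design alternative a utility in [0, 1].
# Names and numbers are invented, not from the CubeSat case study.
alternatives = {
    "design_A": {"payload": 0.9, "power": 0.4, "thermal": 0.7},
    "design_B": {"payload": 0.7, "power": 0.8, "thermal": 0.6},
    "design_C": {"payload": 0.5, "power": 0.5, "thermal": 0.5},
}

def social_welfare(utilities):
    """Group welfare as the sum of individual stakeholder utilities."""
    return sum(utilities.values())

best = max(alternatives, key=lambda a: social_welfare(alternatives[a]))
# design_B has the highest total welfare (2.1) and also avoids leaving
# any single stakeholder far behind, unlike design_A (power = 0.4).
```

In the actual methodology this evaluation would sit inside a game-theoretic negotiation driven by an evolutionary search over a much larger space, rather than a direct maximum over three candidates.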

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 136
1573 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in the paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. The operating conditions were optimized and treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for the primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for the secondary clarified effluent. EC treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.

Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater

Procedia PDF Downloads 386
1572 Improving Tower Grounding and Insulation Level vs. Line Surge Arresters for Protection of Subtransmission Lines

Authors: Navid Eghtedarpour, Mohammad Reza Hasani

Abstract:

Since renewable wind power plants are usually installed in mountain regions and on high ground, their transmission lines are often prone to lightning strikes and their hazardous effects. Although a transmission line is protected using guard wires to prevent lightning surges from striking the phase conductors, back-flashover may still occur due to high tower-footing resistance. A combination of back-flashover corrective methods (tower-footing resistance reduction, insulation level improvement, and line arrester installation) is analyzed in this paper for reducing the back-flashover rate of a double-circuit 63 kV line in the southern region of Fars province. The line crosses a mountain region in some sections with a moderate keraunic level, whereas the tower-footing resistance is substantially high at some towers; consequently, an exceptionally high back-flashover rate has been recorded. A new method for insulation improvement is studied and employed in the current study, consisting of a composite-type creepage extender in the insulator string. The effectiveness of this method for improving the insulation of the string is evaluated through experimental tests. Simulation results, together with one year of monitored operation of the 63 kV line, show that, given the technical, practical, and economic restrictions on operating sub-transmission lines, a combination of corrective methods can provide an effective solution for the protection of transmission lines against lightning.

Keywords: lightning protection, BF rate, grounding system, insulation level, line surge arrester

Procedia PDF Downloads 128
1571 Women in Violent Conflicts and the Challenges of Food Security in Northeast Nigeria: The Case of Boko Haram Insurgency

Authors: Grace Modupe Adebo, Ayodeji Oluwamuyiwa Adedapo

Abstract:

Women are key actors in ensuring food security in terms of food availability, food access, and food utilization in developing economies; however, they suffer most during violent conflicts due to their roles in rearing and caring for their children and relatives. The study was embarked upon to assess the effects of the violent conflict posed by the Boko Haram insurgency on women and food security in northeast Nigeria. The study made use of secondary data: time series data collected over a 22-year period. The data collected were subjected to descriptive statistics and t-test analysis. The findings of the study established a significant difference in food production (availability) before and after the Boko Haram insurgency at the 1% level of significance. The high number of internally displaced persons (IDPs), with a high proportion of women, depicts a very low level of food accessibility, as men and women have fled their places of abode for four to five years, diminishing their economic power and their means of acquiring food, which in turn endangers food stability and utilization. The study confirmed the abduction of women and their changing roles as cooks, porters, spies, partners, and sex slaves to Boko Haram troop members, thus affecting their livelihoods and food security. The study recommends hands-on interventions by governmental, non-governmental and international agencies to end the activities of Boko Haram in the area and restore food production for enhanced food security.

Keywords: Boko Haram insurgency, food accessibility, food production, food utilization, women’s livelihoods

Procedia PDF Downloads 147
1570 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology carries unforeseen risks such as privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of how the principles developed in recent years have evolved. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; to this end, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative, inductive approach to address the theoretical gap. Consequently, this paper tracks the different efforts to achieve “trustworthy AI” and “ethical AI”, compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit; and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as the recommended technical principles to include in the lifecycle of AI. 
Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the Sustainable Development Goals). The results indicate data governance, do no harm, human well-being, and risk management as the crucial AI-for-sustainability principles. The study’s last contribution is to clarify how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, and we advance knowledge of trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
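The frequency test described above amounts to counting how often each principle appears across the corpus of guideline documents. A minimal sketch, with invented placeholder documents rather than the 12 actually analyzed, could look like this:

```python
# Hypothetical sketch of the principle-frequency analysis: each document
# is reduced to the set of principles it mentions, and occurrences are
# counted across the corpus. Contents below are invented placeholders.
from collections import Counter

documents = [
    {"transparency", "privacy", "fairness", "accountability"},
    {"transparency", "safety", "autonomy", "privacy"},
    {"fairness", "privacy", "accountability", "risk management"},
]

freq = Counter(p for doc in documents for p in doc)
top = [p for p, _ in freq.most_common(1)]
# In this toy corpus, "privacy" appears in all three documents and so
# ranks first; in the study, such counts informed the recommended set.
```

The actual analysis would additionally tag each principle as technical or net benefit before counting, per the two-step procedure in the abstract.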

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 93
1569 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; thus, energy use can be reduced by improving the cooling efficiency. Both air and liquid can be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchangers, on-chip heat exchangers and full immersion of the microelectronics. This study quantifies the improvements in heat transfer for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the sealed enclosure filled with dielectric liquid, and forced convection in the water pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.

Keywords: CPU location, data centre cooling, heat sink in enclosures, immersed microelectronics, turbulent natural convection in enclosures

Procedia PDF Downloads 271
1568 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. An earthquake that occurs near a fault line is categorized as a near-field earthquake; by contrast, a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may have serious consequences in terms of structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following the current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, using the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. 
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unlike other earthquake ground motions; therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be used in the generation of a design spectrum for estimating design forces for buildings subjected to near-field ground motions.

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 146
1567 Combined Effect of Global Warming and Water Structures on Rivers’ Water Quality and Aquatic Life: Case Study of Esna Barrage on the Nile River in Egypt

Authors: Sherine A. El Baradei

Abstract:

Global warming and climate change are important topics being studied and investigated nowadays, as they have many diverse impacts on mankind, water quality, aquatic life, wildlife, etc. In addition, many water and hydraulic structures, such as dams and barrages, are being built to satisfy water consumption needs and for irrigation and power generation purposes. Global warming and water structures each have a diversity of impacts on water quality and aquatic life in rivers. This research investigates the combined effect of both water structures and global warming on water quality and aquatic life through mathematical modeling, with a case study of the Esna Barrage on the Nile River in Egypt. The study takes into account both seasons, winter and summer, and their effects on air and hence water temperature in the Nile reach under study. To do so, the study covers the last 23 years in order to capture the effect of global warming and climate change on the river water studied. The mathematical model then combines the dual effect of the Esna Barrage and global warming on the water quality, as well as on the aquatic life, of the Nile reach under study. From the results of the mathematical model, it can be concluded that the dual effect of water structures and global warming is strongly negative for the water quality and aquatic life in rivers upstream of those structures.

Keywords: aquatic life, barrages, climatic change, dissolved oxygen, global warming, river, water quality, water structures

Procedia PDF Downloads 366
1566 Personal Characteristics and Personality Traits as Predictors of Compassion Fatigue among Counselors from Dominican Schools in the Philippines

Authors: Neil Jordan M. Uy, Fe Pelilia V. Hernandez

Abstract:

A counselor is always regarded as a professional who embodies the willingness to help others through the process of counseling. He or she is knowledgeable about and skilled in the different theories, tools, and techniques that are useful in aiding clients to cope with their dilemmas. The negative experiences that clients share during counseling sessions can affect the professional counselor. Compassion fatigue, a professional impairment, is characterized by a decline in one’s productivity and by feelings of anxiety and stress brought about as the counselor empathizes with, listens to, and cares for others. This descriptive study aimed to explore variables that predict compassion fatigue, using three research instruments: a demographic profile sheet, the Professional Quality of Life Scale, and the NEO PI-R. The 52 respondents of this study were counselors from different Dominican schools in the Philippines. Generally, the counselors had a low level of compassion fatigue across personal characteristics (age, gender, years of service, highest educational attainment, and professional status) and personality traits (extraversion, agreeableness, conscientiousness, openness, and neuroticism). ANOVA validated the findings of this study: among the personal characteristics and personality traits, extraversion (F = 3.944, p = 0.026) and conscientiousness (F = 4.125, p = 0.022) showed a significant difference in the level of compassion fatigue, and a very significant difference was observed for neuroticism (F = 6.878, p = 0.002). Among the personal characteristics and personality traits, only neuroticism was found to predict compassion fatigue. The computed r² value of 0.204 from multiple regression analysis suggests that 20.4 percent of the variance in compassion fatigue can be explained by neuroticism. 
The predictive power of neuroticism can be expressed by the regression model Y = 0.156x + 26.464, where x is the neuroticism score.
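The reported regression model can be applied directly. The coefficients below come from the abstract; the input score is an invented example, and with r² = 0.204 any such point prediction carries wide uncertainty:

```python
# The regression model reported in the abstract:
# predicted compassion fatigue Y = 0.156 * x + 26.464,
# where x is the neuroticism score. The example input is hypothetical.
def predicted_compassion_fatigue(neuroticism_score):
    return 0.156 * neuroticism_score + 26.464

y = predicted_compassion_fatigue(100)  # hypothetical neuroticism score
# Note: with r^2 = 0.204, only about 20% of the variance in compassion
# fatigue is explained by neuroticism, so Y is a rough estimate at best.
```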

Keywords: big five personality traits, compassion fatigue, counselors, professional quality of life scale

Procedia PDF Downloads 377
1565 Problems Associated with Fibre-Reinforced Composites Ultrasonically-Assisted Drilling

Authors: Sikiru Oluwarotimi Ismail, Hom Nath Dhakal, Anish Roy, Dong Wang, Ivan Popov

Abstract:

Ultrasonically-assisted drilling (UAD) is a non-traditional technique that superimposes a high-frequency, low-amplitude vibration, usually greater than 18 kHz and less than 20 µm respectively, on a drill bit along the feed direction. UAD has remarkable advantages over conventional drilling (CD), especially a large reduction in drilling force. Force reduction improves the quality of the drilled holes and reduces the power consumption rate and the cost of production. Nevertheless, in addition to the known setbacks of UAD, including the expense of the set-up, unpredictable results and chipping effects, this paper presents the problems of insignificant force reduction and poor surface quality during UAD of hemp fibre-reinforced composites (HFRCs), a natural composite with a polycaprolactone (PCL) matrix. The experimental results show that the HFRCs/PCL samples have more burnt chip material attached to the drilled holes during UAD than during CD. This effect produced a very high surface roughness (Ra), up to 13 µm. In a bid to reduce these challenges, different drilling parameters (feed rates and cutting speeds, and frequencies and amplitudes for UAD), conditions (dry machining and airflow cooling) and drill bit diameters (3 mm and 6 mm high-speed steel), as well as HFRCs/PCL samples of various fibre aspect ratios, including 0 (neat), 19, 26, 30 and 38, were used. However, the setbacks persisted. Evidently, the benefits of UAD are not obtainable for the drilling of the HFRCs/PCL laminates. These problems occurred because the melting temperature of PCL (60 °C) is low, lying within the 56-90.2 °C composite-tool interface temperature range during CD and far below the 265-290.8 °C reached during UAD.

Keywords: force reduction, hemp fibre-reinforced composites, ultrasonically-assisted drilling, surface quality

Procedia PDF Downloads 437
1564 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed and friction coefficient. The feedstock is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature addresses everything from process implementation to characterization and modeling, few research works focus on AFSM, and the physical phenomena taking place during the process remain poorly understood. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is a promising way to study the influence of the process parameters and ultimately identify a relevant process window. Material deposition in AFSM takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, during which pure friction raises the temperature of the system composed of the tool, the filler material, and the substrate. Analytic modeling of the frictional heat generation takes the rotational speed and the contact pressure as its main parameters. The friction coefficient is also considered influential; it is assumed to vary because the system self-lubricates as the temperature rises and because the roughness of the contacting materials smooths over time.
This study proposes, through numerical modeling followed by experimental validation, to investigate the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for calibrating the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
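The abstract does not spell out its analytic friction-heating model. A minimal sketch of the standard sliding-friction estimate for a flat circular tool face, assuming uniform contact pressure and a constant friction coefficient (all numeric values below are illustrative assumptions, not data from the study), could look like:

```python
import math

def friction_heat_input(mu, p, omega, radius):
    """Total frictional heat input (W) for a flat circular tool face.

    Assumes pure sliding with uniform contact pressure p (Pa), constant
    friction coefficient mu, angular speed omega (rad/s) and tool radius
    in metres. Integrating the local flux q(r) = mu * p * omega * r over
    the face gives Q = (2/3) * pi * mu * p * omega * R^3.
    """
    return (2.0 / 3.0) * math.pi * mu * p * omega * radius**3

# Illustrative values: mu = 0.3, p = 50 MPa, 400 rpm spindle speed,
# 10 mm tool radius.
omega = 400 * 2 * math.pi / 60   # rpm -> rad/s
Q = friction_heat_input(0.3, 50e6, omega, 0.010)
print(f"{Q:.0f} W")  # -> 1316 W
```

The cubic dependence on tool radius and the linear dependence on pressure (hence on axial force) make clear why the abstract reports the tool geometry and axial-force fluctuations as strongly influential on the temperature reached during the dwell phase.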

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 145
1563 A Preliminary Exploration of the German Federal Government's Energy Crisis from the Processes of Decision Entrapment Behavior: The Case of the Nord Stream 1 and 2 Shutdowns

Authors: Chia-Han Lee

Abstract:

Without energy, the economy would grind to a halt, and Germany's prosperity and security depend on a reliable and affordable energy supply. In recent years, Germany's energy policy has undergone major changes. Owing to its sharp energy transition, Germany could not extend the service of its nuclear power plants and had to turn to a rapid transitional energy source, natural gas, for a limited time. This study uses the processes of decision entrapment behavior, together with document analysis, to address its research questions. Drawing on primary and secondary sources such as official reports, parliamentary minutes, media interviews, and speech records, the author traces the important events experienced by three coalition governments (under Gerhard Schröder, Angela Merkel, and Olaf Scholz) and their relationship to Nord Stream 1 and Nord Stream 2. These events are then compared against the four-stage process of decision entrapment behavior designed in this study, and the key elements of each stage are examined in turn. Two conclusions are drawn. First, viewed through the processes of decision entrapment behavior, Merkel's government firmly believed it could overcome difficulties because of its past crisis-management experience; the outbreak of war between Ukraine and Russia, however, was beyond Merkel's planning. Second, facing the crisis, the Scholz government increased natural gas imports from other countries and began importing liquefied natural gas to close the gap left by Russian natural gas.

Keywords: German research, Nord Stream gas pipeline, energy policy, processes of decision entrapment behavior

Procedia PDF Downloads 38