Search results for: solid works
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3932

902 Centrifuge Modelling Approach on Seismic Loading Analysis of Clay: A Geotechnical Study

Authors: Anthony Quansah, Tresor Ntaryamira, Shula Mushota

Abstract:

Models for geotechnical centrifuge testing are usually made from re-formed soil, allowing comparisons with naturally occurring soil deposits. However, this process involves a fundamental omission, because natural soil is deposited in layers, creating a unique structure. The nonlinear dynamics of clay deposits play an essential role in modifying ground motions under strong seismic loading, particularly when the different amplification behaviour of acceleration and displacement is considered. This paper presents a review of centrifuge shaking table tests and numerical simulations investigating offshore clay deposits subjected to seismic loading. The observations are accurately reproduced with DEEPSOIL, using soil models and parameters drawn from notable centrifuge modelling studies. Precise 1-D site response analyses are then performed in both the time and frequency domains. The results reveal that when deep soft clay is subjected to large earthquakes, significant acceleration attenuation may occur near the top of the deposit because of soil nonlinearity and even local shear failure; nevertheless, large amplification of displacement at low frequencies is expected regardless of the intensity of the base motions, which suggests that for displacement-sensitive offshore foundations and structures, such amplified low-frequency displacement response will play an essential part in seismic design. This research presents the centrifuge as a tool for creating the layered samples needed to model true soil behaviour (such as permeability) that is not identical in all directions. Currently, there are limited methods for creating layered soil samples.

Keywords: seismic analysis, layered modeling, terotechnology, finite element modeling

Procedia PDF Downloads 140
901 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance

Authors: Mina Naeini, Thomas A. Adams II

Abstract:

Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which make them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages over competing power generation units, the technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on long-term SOFC performance is required. The simplified SOFC models in the current literature usually do not provide realistic results, since they tend to underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of Yttria Stabilized Zirconia (YSZ) electrolytes, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes.
The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources. The model is a function of current density and the H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model is valid over a wide range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model showed that SOFC voltage drops significantly in the first 1500 hours of operation; after that, cells exhibit a slower degradation rate. The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs.
In fact, the literature reports very different degradation rates because it is inconsistent in how the degradation rate is defined and calculated. Conventionally, the degradation rate has been calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Because the voltage profile is nonlinear in time, the magnitude of this rate depends on the time step chosen to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on the degradation rate among the operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrate that a cell running at lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and damaging impact of large current densities on the long-term performance of SOFCs, as explained by the model.
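The step-size sensitivity of the slope-based definition, and the instantaneous alternative, can be illustrated with a short numerical sketch (the voltage profile below is a hypothetical nonlinear decay for illustration, not the authors' data):

```python
import numpy as np

# Hypothetical nonlinear voltage profile: fast drop early in life,
# slower decay afterwards (illustrative only, not the paper's model).
def voltage(t_hours):
    return 0.80 - 0.05 * (1 - np.exp(-t_hours / 500.0)) - 1e-6 * t_hours

def slope_rate(t0, dt):
    """Degradation rate as % voltage drop per 1000 h over a finite window dt."""
    v0, v1 = voltage(t0), voltage(t0 + dt)
    return (v0 - v1) / v0 * 100.0 * (1000.0 / dt)

def instantaneous_rate(t0, eps=1.0):
    """Instantaneous rate: the same quantity over a vanishing window."""
    return slope_rate(t0, eps)

# The finite-window rate depends strongly on the chosen time step,
# while the instantaneous rate is a well-defined limit.
print(slope_rate(500.0, 100.0))   # short averaging window
print(slope_rate(500.0, 2000.0))  # long window: a very different value
print(instantaneous_rate(500.0))
```

Because the profile flattens with time, a long averaging window dilutes the early voltage drop, which is exactly why fixed "% per 1000 h" figures from different studies are hard to compare.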

Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs

Procedia PDF Downloads 116
900 Development of Strategy for Enhanced Production of Industrial Enzymes by Microscopic Fungi in Submerged Fermentation

Authors: Zhanara Suleimenova, Raushan Blieva, Aigerim Zhakipbekova, Inkar Tapenbayeva, Zhanar Narmuratova

Abstract:

Green processes are based on innovative technologies that do not negatively affect the environment. Industrial enzymes originating from biological systems can effectively contribute to sustainable development, being isolated from microorganisms that are fermented using primarily renewable resources. Many widespread microorganisms secrete significant amounts of biocatalysts into the environment, which greatly facilitates their isolation and purification. The ability to control enzyme production through the regulation of biosynthesis and the selection of nutrient media and cultivation conditions makes it possible not only to increase the yield of enzymes but also to obtain enzymes with particular properties. In this regard, immobilized cells hold large potential. An enzyme production technology yielding a secreted, active form suitable for industrial application on an economically feasible scale has been developed. The method is based on the immobilization of enzyme producers on a solid carrier. Immobilization has a range of advantages: it decreases the price of the final product, avoids foreign substances, allows the process of enzyme genesis to be controlled, and enables the simultaneous production of various enzymes. The design of the proposed equipment makes it possible to increase the activity of the immobilized cell culture filtrate compared to free cells grown under periodic culture conditions. This technology provides a ten-fold increase in culture productivity and prolongs both the cultivation of the fungi and the periods of active culture liquid generation. It also improves the quality of the filtrates (making them clearer) and eliminates the time-consuming process of recharging fermentation vials, which requires manual removal of mycelium.

Keywords: industrial enzymes, immobilization, submerged fermentation, microscopic fungi

Procedia PDF Downloads 126
899 Passively Q-Switched 914 nm Microchip Laser for LIDAR Systems

Authors: Marco Naegele, Klaus Stoppel, Thomas Dekorsy

Abstract:

Passively Q-switched microchip lasers offer great potential for sophisticated LiDAR systems due to their compact overall system design, excellent beam quality, and scalable pulse energies. However, many near-infrared solid-state lasers emit at wavelengths > 1000 nm, which are not compatible with state-of-the-art silicon detectors. Here we demonstrate a passively Q-switched microchip laser operating at 914 nm. The microchip laser consists of a 3 mm long Nd:YVO₄ crystal as the gain medium, while Cr⁴⁺:YAG with an initial transmission of 98% is used as a saturable absorber. Quasi-continuous pumping enables single-pulse operation, and low duty cycles ensure low overall heat generation and power consumption. Thus, thermally induced instabilities are minimized, and operation without active cooling is possible, while ambient temperature changes are compensated by adjusting the pump laser current only. Single-emitter diode pumping at 808 nm leads to a compact overall system design and a robust setup. A microchip cavity approach ensures single-longitudinal-mode operation with spectral bandwidths in the picometer regime and results in short laser pulses with durations below 10 ns. Beam quality measurements reveal an almost diffraction-limited beam and enable conclusions about the thermal lens, which is essential for stabilizing the plane-plane resonator. A 7% output coupler transmissivity is used to generate pulses with energies in the microjoule regime and peak powers of more than 600 W. Long-term measurements of pulse duration, pulse energy, central wavelength, and spectral bandwidth emphasize the excellent system stability and support the use of this laser in a LiDAR system.

Keywords: diode-pumping, LiDAR system, microchip laser, Nd:YVO4 laser, passively Q-switched

Procedia PDF Downloads 115
898 Effect of Environmental Parameters on the Water Solubility of the Polycyclic Aromatic Hydrocarbons and Derivatives using Taguchi Experimental Design Methodology

Authors: Pranudda Pimsee, Caroline Sablayrolles, Pascale De Caro, Julien Guyomarch, Nicolas Lesage, Mireille Montréjaud-Vignoles

Abstract:

The MIGR’HYCAR research project was initiated to provide decisional tools for risks connected to oil spill drifts in continental waters. These tools are meant to serve in the decision-making process once oil spill pollution occurs and/or as reference tools for studying scenarios of potential pollution impacts on a given site. This paper focuses on the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. Forty-one PAHs and derivatives, among them the 16 EPA priority pollutants, were studied in dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction differed from the parent oil profile owing to the varying water solubility of the oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity, and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils divide into three groups: the solubility of domestic fuel and Jet A1 was highly sensitive to the parameters studied, meaning these must be taken into account; gasoline (SP95-E10) and diesel fuel showed medium sensitivity; and the four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant with respect to the water-soluble fraction.

Keywords: monitoring, PAHs, water soluble fraction, SBSE, Taguchi experimental design

Procedia PDF Downloads 303
897 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic of cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofibers of the LA and integrating them with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and the CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 98
896 Yellow Necklacepod and Shih-Balady: Possible Promising Sources Against Human Coronaviruses

Authors: Howaida I. Abd-Alla, Omnia Kutkat, Yassmin Moatasim, Magda T. Ibrahim, Marwa A. Mostafa, Mohamed GabAllah, Mounir M. El-Safty

Abstract:

Artemisia judaica (known as shih-balady), Azadirachta indica, and Sophora tomentosa (known as yellow necklace pod) are medicinal plants well known for their traditional medical use in Egypt, which suggests that they probably harbor broad-spectrum antiviral, immunostimulatory, and anti-inflammatory functions. Their ethyl acetate-dichloromethane (1:1, v/v) extracts were evaluated for potential activity against Middle East respiratory syndrome-related coronavirus (MERS-CoV). Their cytotoxic activity was tested in Vero-E6 cells using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method with minor modification. The concentration exhibiting 50% cytotoxicity (TC50) was calculated from the plot of percentage cytotoxicity against extract concentration. A plaque reduction assay was employed at a safe dose of each extract to evaluate its effect on virus propagation. The highest inhibition percentage was recorded for the yellow necklace pod, followed by shih-balady. The possible mode of virus inhibition was studied at three different levels: viral replication, viral adsorption, and virucidal activity. The necklace pod leaves induced virucidal effects and direct effects on virus replication. Phytochemical investigation of the promising necklace pod led to the isolation and structure determination of nine compounds, each determined by a variety of spectroscopic methods. Compounds 4-O-methyl sorbitol (1), 8-methoxy daidzin (6), and 6-methoxy apigenin-7-O-β-D-glucopyranoside (8) were isolated from the genus Sophora for the first time, and the other six compounds were isolated from this species for the first time according to the available literature. Generally, the highest anti-coronavirus activity of S. tomentosa was associated with the crude ethanolic extract, indicating possible synergy among the antiviral phytochemical constituents (1-9).

Keywords: coronavirus, MERS-CoV, mode of action, necklace pod, shih-balady

Procedia PDF Downloads 190
895 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional intersection management models, such as unsignalized or signalized intersections, are not the most effective way of passing vehicles through an intersection if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated further: we extend their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lanes when another lane offers an advantage. This behavior can be observed in real life, where drivers change lanes guided by intuition; the basic intuition for selecting the correct lane is to choose a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow. Experimental results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the intersection's roads. We see the advantage of handling lane management with a potential-based approach in performance metrics such as the average intersection delay and the average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating the inputs to AIM, namely the vehicles in each lane, contributes effectively to intersection management.
The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% for the 4-lane and 6-lane scenarios.
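The lane-selection intuition described above can be sketched as a minimal potential function over lanes, where a vehicle moves to the neighbouring lane with the lowest potential (here simply proportional to queue length, a simplifying assumption rather than the paper's exact formulation):

```python
def lane_potential(queue_length, weight=1.0):
    """Potential of a lane; here just proportional to its queue length."""
    return weight * queue_length

def choose_lane(current, queues):
    """Pick the lane (current or an adjacent one) with the lowest potential.

    `queues[i]` is the number of waiting vehicles in lane i; only
    neighbouring lanes are considered, mimicking a single lane change.
    """
    candidates = [i for i in (current - 1, current, current + 1)
                  if 0 <= i < len(queues)]
    return min(candidates, key=lambda i: lane_potential(queues[i]))

# A vehicle in lane 1 facing queues [4, 7, 2, 9] moves right to lane 2,
# the least crowded reachable lane.
print(choose_lane(1, [4, 7, 2, 9]))
```

Weighting the potential by factors other than queue length (e.g. distance to the intersection) would be a natural extension, but queue length alone captures the "less crowded lane" intuition the abstract describes.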

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 123
894 Rock-Bed Thermocline Storage: A Numerical Analysis of Granular Bed Behavior and Interaction with Storage Tank

Authors: Nahia H. Sassine, Frédéric-Victor Donzé, Arnaud Bruch, Barthélemy Harthong

Abstract:

Thermal Energy Storage (TES) systems are central elements of various types of power plants operated using renewable energy sources. Packed-bed TES can be considered a cost-effective solution in concentrated solar power (CSP) plants. Such a device is made up of a tank filled with a granular bed through which a heat-transfer fluid circulates. However, in such devices the tank might be subjected to catastrophic failure induced by a mechanical phenomenon known as thermal ratcheting: thermal stresses accumulate over cycles of loading and unloading until failure occurs. For instance, when rocks are used as the storage material, the tank wall expands more than the solid medium during the charge process, a gap is created between the rocks and the tank walls, and the filler material settles down to fill it. During discharge, the tank contracts against the bed, resulting in thermal stresses that may exceed the yield stress of the tank wall and generate plastic deformation. This phenomenon is repeated over the cycles, and the tank is slowly ratcheted outward until it fails. This paper studies the evolution of tank wall stresses over granular bed thermal cycles, taking into account both thermal and mechanical loads, with a numerical model based on the discrete element method (DEM). Simulations were performed to study two different thermal configurations: (i) the tank is heated homogeneously along its height, or (ii) with a vertical temperature gradient. The resulting loading stresses applied to the tank are then compared, as well as the response of the internal granular material. Besides the influence of the thermal configuration on the storage tank response, other parameters are varied, such as the internal friction angle of the granular material, the dispersion of particle diameters, and the tank's dimensions, and their influences on the kinematics of the granular bed submitted to thermal cycles are highlighted.

Keywords: discrete element method (DEM), thermal cycles, thermal energy storage, thermocline

Procedia PDF Downloads 390
893 Fault Prognostic and Prediction Based on the Importance Degree of Test Point

Authors: Junfeng Yan, Wenkui Hou

Abstract:

Prognostics and Health Management (PHM) is a technology for monitoring equipment status and predicting impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how characteristic signals are detected is very important. The selection of test points plays a major role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large, complicated systems, the dependency model is sometimes not easily built, and the greater difficulty is computing the matrix. On this premise, this paper provides a highly effective method for selecting test points without a dependency model, based on the signal flow model: a diagnosis model organized around failure modes, which focuses on the system's failure modes and the dependency relationships between test points and faults. In the signal flow model, fault information can flow from the beginning to the end, and the location and structure information of every test point and module can be determined. We decompose the signal flow model into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial differentiation, we obtain every test point's importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This contributes to installing test points according to real requirements and also provides a solid foundation for Prognostics and Health Management. Judging by its effect in practical engineering applications, the method is very efficient.
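As a toy illustration of the importance-degree idea (a hypothetical serial chain, not the paper's actual signal flow model), the undetected rate of a series of test points can be written as the product of their miss probabilities, and each point's importance degree obtained as a partial derivative of that metric:

```python
# Hypothetical serial structure: the undetected rate U is the product of
# each test point's miss probability (1 - p_i), where p_i is the
# detection probability of test point i.
def undetected_rate(p):
    u = 1.0
    for pi in p:
        u *= (1.0 - pi)
    return u

def importance_degrees(p, eps=1e-6):
    """Numerical partial derivative |dU/dp_i| for each test point."""
    base = undetected_rate(p)
    degrees = []
    for i in range(len(p)):
        q = list(p)
        q[i] += eps
        degrees.append(abs((undetected_rate(q) - base) / eps))
    return degrees

p = [0.9, 0.6, 0.3]           # detection probabilities of three test points
print(undetected_rate(p))     # 0.1 * 0.4 * 0.7 = 0.028
print(importance_degrees(p))  # largest for the strongest test point
```

For a serial chain, |dU/dp_i| equals the product of the other points' miss probabilities, so improving the already-strong test point moves the undetected rate the most; a parallel branch would compose differently, which is why the paper decomposes the model into serial and parallel parts first.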

Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate

Procedia PDF Downloads 363
892 Numerical Modelling of 3-D Fracture Propagation and Damage Evolution of an Isotropic Heterogeneous Rock with a Pre-Existing Surface Flaw under Uniaxial Compression

Authors: S. Mondal, L. M. Olsen-Kettle, L. Gross

Abstract:

Fracture propagation and damage evolution are extremely important for many industrial applications, including the mining industry, composite materials, earthquake simulation, and hydraulic fracturing. The influence of pre-existing flaws and rock heterogeneity on the processes and mechanisms of rock fracture has important ramifications in many mining and reservoir engineering applications. We simulate the damage evolution and fracture propagation in an isotropic sandstone specimen containing a pre-existing 3-D surface flaw in different configurations under uniaxial compression. We apply a damage model based on the unified strength theory and solve the solid deformation and damage evolution equations using the Finite Element Method (FEM) with tetrahedral elements on unstructured meshes through the simulation software eScript. Unstructured meshes provide greater geometrical flexibility and allow the varying flaw depth, angle, and length to be modelled more accurately through locally adapted FEM meshes. The heterogeneity of the rock is considered by initializing material properties from a Weibull distribution sampled over a cubic grid. In our model, we introduce a length scale related to the rock heterogeneity that is independent of the mesh size. We investigate the effect of parameters including the heterogeneity of the elastic moduli and the geometry of the single flaw on the stress-strain response. Three typical surface cracking patterns, called wing cracks, anti-wing cracks, and far-field cracks, were identified, depending on the geometry of the pre-existing surface flaw. These model results help to advance our understanding of fracture and damage growth in heterogeneous rock, with the aim of developing fracture simulators for different industry applications.
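The Weibull-based heterogeneity initialization described above might be sketched as follows (the shape and scale parameters are illustrative assumptions, not the study's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample a heterogeneous Young's modulus field on a cubic grid.
# A larger Weibull shape parameter m gives a more homogeneous material;
# m, E0 and the grid size below are purely illustrative.
m, E0, n = 3.0, 20.0e9, 16        # shape, scale (Pa), grid points per edge
E = E0 * rng.weibull(m, size=(n, n, n))

print(E.shape)         # (16, 16, 16)
print(E.mean() / 1e9)  # mean modulus in GPa, near E0 * Gamma(1 + 1/m)
```

In a mesh-independent scheme like the one the abstract describes, each element would then look up the property of the grid cell it falls in, so the heterogeneity length scale is set by the grid spacing rather than the FEM mesh size.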

Keywords: finite element method, heterogeneity, isotropic damage, uniaxial compression

Procedia PDF Downloads 199
891 The Importance of Previous Examination Results in Future Differential Diagnostic Procedures, Especially in the Era of Covid-19

Authors: Angelis P. Barlampas

Abstract:

Purpose or Learning Objective: It is well known that previous examinations play a major role in future diagnosis, avoiding unnecessary new exams that cost both the patient and the health system time and money. A case is presented in which the patient's past results, combined with the minimum of new tests, gave an easy final diagnosis.

Methods or Background: A middle-aged man visited the emergency department complaining of poorly controlled, persistent fever over the last few days. Laboratory tests showed an elevated white blood cell count with a neutrophil shift and abnormal CRP. The patient had been admitted to hospital a month earlier for continuing lung symptomatology after a recent Covid-19 infection.

Results or Findings: Computed tomography showed a solid mass with spiculated margins in the right lower lobe. After intravenous iodine contrast administration, there was mild peripheral enhancement with an eccentric non-enhancing area. Lung cancer was suspected. Comparison with the patient's latest computed tomography revealed no mass in the area of interest, only signs of recent post-Covid-19 lung parenchyma abnormalities. A new mass appearing within a month's time span cannot be a cancer but must be a benign lesion; an abscess was clearly the most suitable explanation. The patient was admitted to hospital and given antibiotic therapy, with very good results. After a few days, the patient was afebrile and in good condition.

Conclusion: In this case, a PET scan or a biopsy was avoided thanks to the patient's medical history and the availability of previous examinations. It is worth encouraging patients to keep their medical records, and organizing the health system more efficiently with current technology for archiving medical examinations.

Keywords: covid-19, chest ct, cancer, abscess, fever

Procedia PDF Downloads 46
890 Recycling of Spent Mo-Co Catalyst for the Recovery of Molybdenum Using Cyphos IL 104

Authors: Harshit Mahandra, Rashmi Singh, Bina Gupta

Abstract:

Molybdenum is widely used in thermocouples, in the anticathodes of X-ray tubes, and in the production of alloy steels. Molybdenum compounds are extensively used as catalysts in petroleum-refining industries for hydrodesulphurization. The activity of the catalysts decreases gradually with time, and they are dumped as hazardous waste due to contamination with toxic materials during the process. These spent catalysts can serve as a secondary source for metal recovery and help address environmental and economic issues. In the present study, the extraction and separation of molybdenum from a Mo-Co spent catalyst leach liquor containing 0.870 g L⁻¹ Mo, 0.341 g L⁻¹ Co, 0.422 × 10⁻¹ g L⁻¹ Fe, and 0.508 g L⁻¹ Al in 3 mol L⁻¹ HCl has been investigated using solvent extraction. The extracted molybdenum was finally recovered as molybdenum trioxide. The leaching conditions used were 3 mol L⁻¹ HCl, a temperature of 90 °C, a solid-to-liquid ratio (w/v) of 1.25%, and a reaction time of 60 minutes; 96.45% of the molybdenum was leached under these conditions. For the extraction of molybdenum from the leach liquor, Cyphos IL 104 [trihexyl(tetradecyl)phosphonium bis(2,4,4-trimethylpentyl)phosphinate] in toluene was used as the extractant. Around 91% of the molybdenum was extracted with 0.02 mol L⁻¹ Cyphos IL 104, and 75% of the molybdenum was stripped from the loaded organic phase with 2 mol L⁻¹ HNO₃ at A/O = 1/1. McCabe-Thiele diagrams were drawn to determine the number of stages required for the extraction and stripping of molybdenum; according to these plots, two stages are required for both extraction and stripping at A/O = 1/1, which was also confirmed by countercurrent simulation studies. Around 98% of the molybdenum was extracted in two countercurrent extraction stages with no co-extraction of cobalt and aluminum. Iron was removed from the loaded organic phase by scrubbing with 0.01 mol L⁻¹ HCl. Quantitative recovery of molybdenum is achieved in three countercurrent stripping stages at A/O = 1/1.
Molybdenum trioxide was obtained from the strip solution and characterized by XRD, FE-SEM, and EDX techniques. Owing to its distinctive electrochromic, thermochromic, and photochromic properties, molybdenum trioxide is used as a smart material for sensors, lubricants, and Li-ion batteries. It finds application in processes such as methanol oxidation, metathesis, propane oxidation, and hydrodesulphurization, and it can also be used as a precursor for the synthesis of MoS₂ and MoSe₂.
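The reported stage counts can be cross-checked with the standard countercurrent (Kremser-type) recovery formula; here the distribution ratio is back-calculated from the reported 91% single-contact extraction, which is an illustrative assumption rather than the authors' measured value:

```python
def distribution_ratio(fraction_extracted, a_to_o=1.0):
    """Distribution ratio D from a single-contact extraction at ratio A/O."""
    return fraction_extracted / (1.0 - fraction_extracted) * a_to_o

def countercurrent_recovery(D, stages, o_to_a=1.0):
    """Kremser-type overall recovery after n countercurrent stages."""
    eps = D * o_to_a  # extraction factor
    return (eps ** (stages + 1) - eps) / (eps ** (stages + 1) - 1.0)

D = distribution_ratio(0.91)          # ~10.1 implied by 91% in one contact
print(countercurrent_recovery(D, 2))  # close to the ~98% reported for 2 stages
```

The two-stage prediction lands near the reported ~98% extraction, which is the same consistency check a McCabe-Thiele construction performs graphically.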

Keywords: Cyphos IL 104, molybdenum, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 187
889 Determination of the Cooling Rate Dependency of High Entropy Alloys Using a High-Temperature Drop-on-Demand Droplet Generator

Authors: Saeedeh Imani Moqadam, Ilya Bobrov, Jérémy Epp, Nils Ellendt, Lutz Mädler

Abstract:

High entropy alloys (HEAs), having adjustable properties and enhanced stability compared with intermetallic compounds, are solid solution alloys that contain five or more principal elements in almost equal atomic percentages. The concept of producing such alloys paves the way for developing advanced materials with unique properties. However, the synthesis of such alloys may require advanced processes with high cooling rates, depending on which alloying elements are used. In this study, microspheres of different diameters of HEAs were generated with a drop-on-demand droplet generator and subsequently solidified during free fall in an argon atmosphere. Such droplet generators can produce individual droplets with high reproducibility regarding droplet diameter, trajectory, and cooling, while avoiding any interparticle momentum or thermal coupling. Metallographic as well as X-ray diffraction investigations for each diameter of the generated metallic droplets were then carried out to obtain information about the microstructural state. To calculate the cooling rate of the droplets, a droplet cooling model was developed and validated using model alloys such as CuSn6 and AlCu4.5, for which the correlation of secondary dendrite arm spacing (SDAS) and cooling rate is well known. Droplets were generated from these alloys and their SDAS was determined using quantitative metallography. The cooling rate was then determined from the SDAS and used to validate the cooling rates obtained from the droplet cooling model. The application of this model to the HEAs then leads to the cooling rate dependency and hence to the identification of process windows for the synthesis of these alloys. These process windows were then compared with cooling rates obtained in processes such as powder production, spray forming, selective laser melting, and casting to predict whether synthesis is possible with these processes.
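The SDAS-based validation step can be sketched numerically. Dendrite coarsening is commonly described by a power law SDAS = k·R⁻ⁿ relating arm spacing to cooling rate R; inverting it recovers R from a measured SDAS. The coefficients below are illustrative placeholders only — real values are alloy-specific and must be fitted, which is precisely what the CuSn6/AlCu4.5 calibration in the abstract provides:

```python
def cooling_rate_from_sdas(sdas_um, k=50.0, n=1.0 / 3.0):
    """Invert the dendrite-coarsening power law SDAS = k * R**(-n) to
    estimate the cooling rate R (K/s) from a measured SDAS (in µm).
    k and n are hypothetical placeholders, not fitted values."""
    return (k / sdas_um) ** (1.0 / n)

# Smaller droplets -> finer dendrites -> higher inferred cooling rate.
for sdas in (2.0, 5.0, 10.0):
    print(f"SDAS = {sdas:4.1f} um  ->  R ~ {cooling_rate_from_sdas(sdas):10.1f} K/s")
```

The strong (cubic, for n = 1/3) dependence of R on SDAS is why small errors in the metallographic SDAS measurement translate into large uncertainties in the inferred cooling rate.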

Keywords: cooling rate, drop-on-demand, high entropy alloys, microstructure, single droplet generation, X-ray Diffractometry

Procedia PDF Downloads 193
888 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of design-thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in design thinking, IDEO (Innovation, Design, Engineering Organization), defines design thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been shown to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on data science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation.
Thus, the data science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of design thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 62
887 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must either be recycled to the upstream process for online reuse or stored temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations via plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transferring the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models that represent hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down.
Two virtual case studies on flare minimizations for start-up operation (over 50% of emission savings) and shut-down operation (over 70% of emission savings) of an ethylene plant have been employed to demonstrate the efficacy of the proposed study.

Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up

Procedia PDF Downloads 302
886 To Identify the Importance of Telemedicine in Diabetes and Its Impact on HbA1c

Authors: Sania Bashir

Abstract:

A promising approach to healthcare delivery, telemedicine makes use of communication technology to reach remote regions of the world, allowing beneficial interactions between diabetic patients and healthcare professionals as well as the provision of affordable, easily accessible medical care. The emergence of contemporary care models, fueled by the pervasiveness of mobile devices, provides better information and low-cost, high-quality outcomes; known as digital health, it involves the integration of collected data using software and apps. The goal of this study is to assess how well telemedicine works for diabetic patients and how it impacts their HbA1c levels. A questionnaire-based survey of 300 diabetic patients included 150 patients in each of two groups: usual care and telemedicine. A descriptive, observational study was conducted from September 2021 to May 2022. HbA1c was collected for both groups every three months. A remote monitoring tool was used to assess the efficacy of telemedicine and continuing therapy in place of the customary three-monthly in-person consultations. The patients were 42.3 ± 18.3 years old on average. The 128 men were outnumbered by 172 women (57.3% of the total). 200 patients (66.6%) had type 2 diabetes and 100 (33.3%) had type 1. Although the average baseline BMI was within normal ranges at 23.4 kg/m², the mean baseline HbA1c (9.45 ± 1.20) indicates that glycemic treatment was not well controlled at the time of registration. While patients who used telemedicine experienced a mean percentage change in HbA1c of 10.5, those who visited the clinic experienced a mean percentage change of 3.9. Changes in HbA1c depend on several factors, including improvements in BMI (61%) over the 9 months of the study and compliance with healthy lifestyle recommendations for diet and activity.
Better compliance was achieved by the telemedicine group. It is an undeniable reality that patient-physician communication is crucial for enhancing health outcomes and avoiding long-term complications. Telemedicine has shown its value in the management of diabetes and holds promise as a novel technique for improved clinician-patient communication in the twenty-first century.

Keywords: diabetes, digital health, mobile app, telemedicine

Procedia PDF Downloads 71
885 Managing Early Stakeholder Involvement at the Early Stages of a Building Project Life Cycle

Authors: Theophilus O. Odunlami, Hasan Haroglu, Nader Saleh-Matter

Abstract:

The challenges facing the construction industry are often worsened by the compounded nature of projects, coupled with the complexity of the key stakeholders involved at different stages of a project. Projects are planned to achieve outlined benefits in line with the business case; however, a lack of effective management of key stakeholders can result in unrealistic delivery aspirations, unnecessary re-work, and overruns. The aim of this study is to examine the early stages of a project lifecycle and investigate stakeholder management and involvement processes and their impact on the successful delivery of the project. The research engaged with conventional construction organisations, project personnel, and stakeholders on diverse projects, analysing existing project case studies, narrative enquiries, interviews, and surveys through a combination of qualitative, quantitative, and mixed methods of analysis. Research findings have shown that the involvement of stakeholders at different levels during the early stages has pronounced effects on project delivery; it helps to forge synergy and promotes a clear understanding of individual responsibilities, strengths, and weaknesses. This has often fostered a positive sense of productive collaboration right through the early stages of the project. These research findings are intended to contribute to the development of a process framework for stakeholder and project team involvement in the early stages of a project. This framework will align with the selection criteria for stakeholders, contractors, and resources, ultimately contributing to the successful completion of projects. The primary question addressed in this study is how stakeholder involvement and management during the early stages of a building project life cycle impact project delivery. Findings showed that early-stage stakeholder involvement and collaboration between project teams and contractors significantly contribute to project success.
However, a strong and healthy communication strategy would be required to maintain the flow of value-added ideas among stakeholders at the early stages to benefit the project at the execution stage.

Keywords: early stages, project lifecycle, stakeholders, decision-making strategy, project framework

Procedia PDF Downloads 85
884 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks

Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun

Abstract:

Gears are important components with a vital role in many rotating machines. One of the common causes of gear failure is tooth fatigue cracking; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published papers, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software was used to determine the stiffness variation with respect to angular position for different combinations of crack orientation and depth. A simplified six-degree-of-freedom nonlinear lumped-parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The stiffness model with the crack permitted updating the physical parameters of the six-degree-of-freedom equations of motion describing the vibration of the gearbox. The vibration simulation results of the gearbox were obtained using Simulink/Matlab. The effect of a single crack at different depths was studied thoroughly. The change in mesh stiffness and the vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered; they showed different degrees of sensitivity to crack depth. Multiple cracks were also introduced at different locations, and the vibration response along with the statistical parameters was obtained again for a general case of degradation (increase in crack depth, number of cracks, and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change, or even decrease, when the number of cracks increases.
Therefore, the use of any statistical parameters could be misleading if not considered in an appropriate way.
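The sensitivity of the time-domain indicators mentioned above can be illustrated with a short sketch. The signal below is purely synthetic (not the study's gearbox model): a meshing tone, then the same tone with periodic impulses mimicking a cracked tooth, showing that impulsive content raises kurtosis and crest factor far more than RMS:

```python
import numpy as np

def time_domain_features(x):
    """RMS, peak, crest factor and kurtosis of a vibration record --
    the kind of statistical time-domain indicators tracked in the study."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurt}

# Synthetic illustration: 50 Hz tone sampled at 5 kHz for 1 s.
t = np.linspace(0.0, 1.0, 5000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy.copy()
faulty[::250] += 5.0   # one impulse per simulated revolution

print(time_domain_features(healthy)["kurtosis"])   # 1.5 for a pure sine
print(time_domain_features(faulty)["kurtosis"])    # markedly larger
```

This is the mechanism behind the study's observation: indicators dominated by a few large excursions (kurtosis, crest factor) respond strongly to a single deep crack but can saturate, or even fall, as additional cracks spread the impulsive energy over the revolution.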

Keywords: spur gear, cracked tooth, numerical simulation, time-domain parameters

Procedia PDF Downloads 252
883 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U.amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from −122.9 nT to 147.0 nT and the regional intensity varies between −106.9 nT and 137.0 nT, while the residual intensity ranges between −51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
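Standard Euler deconvolution, as applied above, reduces to a per-window least-squares problem: the homogeneity equation (x−x₀)∂T/∂x + (y−y₀)∂T/∂y + (z−z₀)∂T/∂z = N(B−T) is linear in the unknown source position (x₀, y₀, z₀) and regional background B once the structural index N is fixed. A minimal sketch (not the Oasis Montaj implementation; the synthetic source position is an invented example):

```python
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, N):
    """Least-squares solution of Euler's homogeneity equation
        (x - x0)*dT/dx + (y - y0)*dT/dy + (z - z0)*dT/dz = N*(B - T)
    over one data window, for source position (x0, y0, z0) and
    background B, with the structural index N fixed a priori."""
    A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B

# Synthetic check: a field T = 1/r^2 is homogeneous of degree -2 (N = 2),
# generated by a hypothetical source at (10, 5, 3); observations on z = 0.
rng = np.random.default_rng(0)
x = rng.uniform(20.0, 40.0, 60)
y = rng.uniform(20.0, 40.0, 60)
z = np.zeros(60)
dx, dy, dz = x - 10.0, y - 5.0, z - 3.0
r2 = dx**2 + dy**2 + dz**2
T = 1.0 / r2
dTdx, dTdy, dTdz = -2.0 * dx / r2**2, -2.0 * dy / r2**2, -2.0 * dz / r2**2
print(euler_window(x, y, z, T, dTdx, dTdy, dTdz, N=2.0))  # ~ (10, 5, 3, 0)
```

In practice the window slides over the gridded anomaly, the derivatives come from the enhancement products listed above (horizontal and vertical derivatives), and solutions are kept or rejected by their depth-uncertainty estimates.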

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 60
882 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is food, which is satisfied through farming. Farming is one of the major revenue generators for the Indian economy; agriculture is not only a source of employment but also fulfils humans' basic needs, making it a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Because crop production is affected by climate change, machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine learning algorithms and models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes, which can be vital in increasing the productivity of the agricultural food industry. The paper also illustrates how machine learning is applied to sensor data in agricultural work. Machine learning is an ongoing technology benefitting farmers by improving gains in agriculture and minimizing losses, and this paper discusses how irrigation and farming management systems evolve efficiently in real time.
AI-enabled programs are emerging that support farmers through extensive examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 86
881 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity

Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois

Abstract:

With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows and, at the same time, settle particles. It is, therefore, necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with geometry, inlet flow rate, and water depth; as the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; this is ignored in most investigations, but it deserves much more attention, as the influence of particle accumulation on sedimentation can be important. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations, to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then identified through modeling with a particle tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition.
Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigations seem required for basal boundary conditions based on turbulent kinetic energy and shear stress. These observations are confirmed by experimental investigations processed in the laboratory.

Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation

Procedia PDF Downloads 298
880 Microstructure, Mechanical and Tribological Properties of (TiTaZrNb)Nx Medium Entropy Nitride Coatings: Influence of Nitrogen Content and Bias Voltage

Authors: Mario Alejandro Grisales, M. Daniela Chimá, Gilberto Bejarano Gaitán

Abstract:

High entropy alloys (HEA) and high entropy nitrides (HEN) are currently very attractive to the automotive, aerospace, metalworking, and materials-forming manufacturing industries, among others, for exhibiting higher mechanical properties, wear resistance, and thermal stability than binary and ternary alloys. In this work, medium-entropy coatings of TiTaZrNb and their nitrides (TiTaZrNb)Nx were synthesized onto AISI 420 and M2 steel samples by the direct current magnetron sputtering technique. The influence of the bias voltage supplied to the substrate on the microstructure, chemical composition, and phase composition of the matrix coating was evaluated, and the effect of nitrogen flow on the microstructural, mechanical, and tribological properties of the corresponding nitrides was studied. A change in the crystalline structure from BCC for the TiTaZrNb coatings to FCC for (TiTaZrNb)Nx was observed, which is associated with the incorporation of nitrogen into the matrix and the consequent formation of a (TiTaZrNb)Nx solid solution. An increase in hardness and residual stress was observed with increasing bias voltage for TiTaZrNb, reaching 12.8 GPa for the coating deposited at a bias of -130 V. For the (TiTaZrNb)Nx nitride, a greater hardness of 23 GPa was achieved for the coating deposited with a N2 flow of 12 sccm, which dropped slightly to 21.7 GPa for that deposited with a N2 flow of 15 sccm. The slight reduction in hardness could be associated with the precipitation of the TiN and ZrN phases that form at higher nitrogen flows. The specific wear rate of the deposited coatings ranged between 0.5×10¹³ and 0.6×10¹³ N/m², whereas the steel substrate exhibited an average hardness of 2.0 GPa and a specific wear rate of 203.2×10¹³ N/m². The synthesized nitride coatings were thus both harder and more wear-resistant than the steel substrate, demonstrating their protective effect against wear.

Keywords: medium entropy coatings, hard coatings, magnetron sputtering, tribology, wear resistance

Procedia PDF Downloads 54
879 Contribution of Hydrogen Peroxide in the Selective Aspect of Prostate Cancer Treatment by Cold Atmospheric Plasma

Authors: Maxime Moreau, Silvère Baron, Jean-Marc Lobaccaro, Karine Charlet, Sébastien Menecier, Frédéric Perisse

Abstract:

Cold atmospheric plasma (CAP) is an ionized gas generated at atmospheric pressure with the temperature of the heavy particles (molecules, ions, atoms) close to room temperature. Recent studies have shown that both in-vitro and in-vivo plasma exposure of many cancer cell lines is effective at inducing the apoptotic pathway of cell death. In some other works, normal cell lines seem to be less affected by plasma than cancer cell lines; this is called the selectivity of plasma. It is highly likely that the RNOS (reactive nitrogen and oxygen species) generated in the plasma jet, but also in the medium, play a key role in this selectivity. In this study, two CAP devices will be compared in terms of electrical power, chemical species composition, and their efficiency in killing cancer cells, with a particular focus on the action of hydrogen peroxide. The experiments will proceed as follows for both devices: electrical and spectroscopic characterization at different voltages, then plasma treatment of normal and cancer cells to compare CAP efficiency between cell lines and to show that death is induced by oxidative stress. To highlight the importance of hydrogen peroxide, an inhibitor of H2O2 will be added to the cell culture medium before treatment, and the resulting cell viability will be compared with that from simple plasma exposure. In addition, H2O2 production will be measured by treating the medium alone with plasma. Cell lines will also be exposed to different concentrations of hydrogen peroxide in order to characterize the cytotoxic threshold for the cells and to compare it with the quantity of H2O2 produced by the CAP devices. Finally, the catalase activity of the different cell lines will be quantified; this enzyme is an important antioxidant agent against hydrogen peroxide.
A correlation between the cells' response to plasma exposure and this catalase activity would be a strong argument in favor of the predominant role of H2O2 in explaining the selectivity of cancer treatment by cold atmospheric plasma.

Keywords: cold atmospheric plasma, hydrogen peroxide, prostate cancer, selectivity

Procedia PDF Downloads 131
878 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas respectively. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest gas, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature and can be assumed to achieve slightly higher efficiencies at elevated temperatures. However, both design methods led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these general tendencies are expected to be amplified so that the difference between the two design methods becomes more obvious.
Although the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.

Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing

Procedia PDF Downloads 196
877 Accountability Mechanisms of Leaders and Its Impact on Performance and Value Creation: Comparative Analysis (France, Germany, United Kingdom)

Authors: Bahram Soltani, Louai Ghazieh

Abstract:

Managerial accountability has gained great importance in the wake of the financial crisis and the various pressures companies face in fulfilling their duties. The main objective of this study is to explain the variation in the accountability mechanisms of managers across advanced capitalist economies, and then to study the impact of these mechanisms on performance and value creation in European companies. To reach this goal, we assembled a final sample of, on average, 284 French, British, and German listed companies, with 2,272 annual reports examined over the period from 2005 to 2012. We first examined the causal links between the firm-level determining mechanisms, such as the characteristics of the board of directors, the composition of shareholding, and the ethics of the company, on the one hand, and the profitability of the company on the other. The results show that the smooth running of the board of directors and its specialist committees are very important determinants of managerial accountability, which positively impacts performance and value creation in the company. Furthermore, our results confirm that the presence of a solid ethical environment within the company increases the probability that managers make ethical choices in organizational decision-making. Secondly, we studied the impact of the determining mechanisms bound to the function and profile of the manager, namely his relational links, remuneration, training, age, and experience, on the performance and value creation of the company. Our results highlight a negative relation between the manager's relational links, very high remuneration, and the overall profitability of the company. This study contributes to the literature on the determining mechanisms of company directors' accountability.
It establishes an empirical and comparative analysis between three influential European countries, namely France, the United Kingdom, and Germany.

Keywords: leaders, company’s performance, accountability mechanisms, corporate governance, value creation of firm, financial crisis

Procedia PDF Downloads 355
876 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended-solids removal in one unit process. Contaminant removal can be achieved chemically or biologically; in both cases the filter's removal efficiency depends on the interaction between the packed filter media and the influent. In this paper a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and measuring the tracer concentration at the outflow as a function of time; the tracer was injected at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and the variance σ² are the two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was best defined at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than inside the white dolomite: at 3.8 m h-1 the tracer took 10 minutes to leave the limestone but only 8 minutes to leave the white dolomite. In conclusion, determining the optimal hydraulic loading rate, i.e. the one that achieves the best influent distribution over the filtration system, helps to establish the applicability of a material as filter media. Further work will examine the efficiency of limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
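As a rough illustration of the moment analysis described in the abstract, the MRT and variance σ² can be computed numerically from a measured outflow tracer curve C(t). The sketch below is not from the paper; it uses synthetic exponential data standing in for a real trace, and trapezoidal integration for the moments:

```python
import numpy as np

def rtd_moments(t, c):
    """First two moments of a residence time distribution.

    Normalizes a measured outflow tracer curve C(t) into the exit-age
    distribution E(t) = C(t) / integral(C dt), then returns the mean
    residence time MRT = integral(t * E dt) and the variance
    sigma^2 = integral((t - MRT)^2 * E dt).
    """
    e = c / np.trapz(c, t)                 # exit-age distribution E(t)
    mrt = np.trapz(t * e, t)               # first moment (MRT)
    var = np.trapz((t - mrt) ** 2 * e, t)  # second central moment (sigma^2)
    return mrt, var

# Synthetic stand-in for a measured trace: an ideal mixed tank with
# tau = 2 min has E(t) = (1/tau) * exp(-t/tau), so MRT = tau, sigma^2 = tau^2.
t = np.linspace(0.0, 40.0, 4001)   # time, min
c = np.exp(-t / 2.0)               # tracer concentration, arbitrary units
mrt, var = rtd_moments(t, c)
```

A larger MRT at the same HLR, as measured for limestone, indicates that the tracer spends longer in contact with the media; the variance quantifies the spread of the exit-age distribution.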

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 262
875 Sustainable Development of Adsorption Solar Cooling Machine

Authors: N. Allouache, W. Elgahri, A. Gahfif, M. Belmedani

Abstract:

Solar radiation is by far the world's largest, most abundant, clean, and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use: the energy available from the sun exceeded the global energy demand of 2006 by a factor of about 5,200. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, energy saving, and sustainable development, which are major issues of the 21st century. Among these technologies are solar cooling systems, which make use of either absorption or adsorption. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural and free from CFCs, and therefore have zero ozone depletion potential (ODP). A numerical analysis of the thermal and solar performance of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. Modelling the adsorption cooling machine requires solving the equations describing energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analysed and discussed. The performance of the system depends on the incident global irradiance over the day, and hence on the weather conditions, as well as on the condenser and evaporator temperatures. The AC35/methanol pair outperforms BPL/ammonia in terms of system performance.
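The Dubinin-Astakhov isotherm mentioned above has a closed form, so the adsorbed quantity is straightforward to evaluate once the pair's constants are fitted. A minimal sketch follows; the parameter values (W0, D, n) are illustrative literature-style figures for an activated carbon/methanol pair, not values taken from this study:

```python
import math

def dubinin_astakhov(T, P, Ps, W0=0.425e-3, D=5.02e-7, n=2.15, rho_l=791.0):
    """Adsorbed mass per kg of adsorbent (kg/kg) from the D-A isotherm:

        x = W0 * rho_l * exp(-D * (T * ln(Ps/P))**n)

    T     : adsorbent temperature (K)
    P, Ps : adsorbate vapor pressure and saturation pressure (same units)
    W0    : limiting micropore volume (m^3/kg); D, n : fitted constants
    rho_l : liquid adsorbate density (kg/m^3; methanol is about 791)
    """
    return W0 * rho_l * math.exp(-D * (T * math.log(Ps / P)) ** n)

# At saturation (P = Ps) the micropores are full: x = W0 * rho_l ~ 0.336 kg/kg
x_sat = dubinin_astakhov(T=300.0, P=1.0e4, Ps=1.0e4)
# At lower relative pressure the adsorber holds less adsorbate
x_half = dubinin_astakhov(T=300.0, P=5.0e3, Ps=1.0e4)
```

Sweeping T between the adsorber's heating and cooling temperatures gives the adsorbed-mass swing per cycle, which drives the machine's performance coefficients.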

Keywords: activated carbon-methanol pair, activated carbon-ammonia pair, adsorption, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 57
874 Cognitivism in Classical Japanese Art and Literature: The Cognitive Value of Haiku and Zen Painting

Authors: Benito Garcia-Valero

Abstract:

This paper analyses the cognitivist value of traditional Japanese theories of aesthetics, art, and literature. These reflections were developed several centuries before modern cognitive studies, which emerged in the 1970s. A comparative methodology is employed to shed light on the similarities between traditional Japanese conceptions of art and current cognitivist principles. The Japanese texts compared are Zeami's treatise on noh art, Okura Toraaki's Waranbe-gusa on kabuki theatre, and several Buddhist canonical texts on wisdom and knowledge, such as the Prajnaparamitahrdaya or Heart Sutra. Japanese contemporary critical sources on these works are also drawn upon, such as Nishida Kitaro's reflections on Zen painting and Ichikawa Hiroshi's analysis of body/mind dualism in Japanese physical practices. Their ideas are compared with those of cognitivist authors such as George Lakoff, Mark Johnson, Mark Turner, and Margaret Freeman. This comparative review reveals how Japanese thinking anticipated ideas on the body/mind interrelationship that agree with the cognitivist critique of dualism, since both elucidate the physical grounds underlying the formation of concepts and schemas in the production of knowledge. It also highlights the need to recover ancient Japanese treatises on cognition to continue informing current research on art and literature. The artistic examples used to illustrate the theory are Sesshu's Zen paintings and Basho's classical haiku poetry. Zen painting is an excellent field for demonstrating how monk artists conceived human perception and intuited the active role of beholders in the contemplation of art. On the other hand, some haikus by Matsuo Basho aim at factoring subjectivity out of artistic praxis, an ideal of illumination that cannot be achieved through art due to the embodied nature of perception, a constraint consciously explored by the poet himself. These ideas consolidate the conclusions drawn today by cognitivism about the interrelation between subject and object and the concept of intersubjectivity.

Keywords: cognitivism, dualism, haiku, Zen painting

Procedia PDF Downloads 123
873 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known type of industrial accident for over six decades, yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contributions of the vapor and liquid phases to the overpressure generation in the near field. An experimental campaign was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, along with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel explosion and the blast formation by shadowgraph. The results show that the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between vapor content and the maximum overpressure of the lead shock. The time chronology of events shows that the vapor phase is the main contributor to the aerial overpressure peak, and a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory of how the second shock observed in the experiments forms is presented, supported by an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
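The finding that the vapor phase dominates the lead-shock overpressure is often rationalized with simple stored-energy estimates for the compressed vapor space. The sketch below is not the paper's prediction model; it shows one common first-order textbook estimate (Brode's equation) with a TNT-equivalence conversion for blast scaling, where the burst pressure, vapor volume, and specific-heat ratio are all assumed example values:

```python
def brode_energy(p_burst, p_atm, v_vapor, gamma=1.13):
    """Brode's estimate of the energy (J) stored in the compressed vapor
    space of a vessel: E = (p1 - p0) * V / (gamma - 1).
    gamma = 1.13 is an assumed specific-heat ratio for propane vapor."""
    return (p_burst - p_atm) * v_vapor / (gamma - 1.0)

def tnt_equivalent(energy_j, e_tnt=4.68e6):
    """TNT-equivalent mass (kg), assuming 4.68 MJ per kg of TNT."""
    return energy_j / e_tnt

# Hypothetical small-scale test: 2 MPa burst pressure, 0.05 m^3 vapor space
e_vap = brode_energy(p_burst=2.0e6, p_atm=1.01e5, v_vapor=0.05)
m_tnt = tnt_equivalent(e_vap)
```

Estimates of this kind consider only the vapor space and neglect the flashing liquid, which is consistent in spirit with the experimental observation here that the vapor phase drives the lead-shock peak; the liquid contribution and the anisotropy of the field require the fuller treatment developed in the paper.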

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 58