Search results for: breath monitoring using pressure sensors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7966


916 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, and sensors in biomedical technology. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industries (radar-absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool for predicting structural, electronic, magnetic and optical behavior at the atomistic level, and consequently shortens the time needed to design and develop new materials with novel or enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require enormous computational effort to solve the Schrödinger equation for even a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of much larger systems in a short span of time. In this study, the structural evolution, magnetic and electronic properties of Fe-Ni based nanoalloys have been studied using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP).
The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk alloys and nanoalloys have been investigated by combining MD simulation with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk samples and nanoparticles (NPs) has been studied through calculation of radial distribution functions (RDF), interatomic distances, coordination numbers and core-to-surface concentration profiles, as well as Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange-correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations of the structural evolution, magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
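As an illustration of one of the structural measures above, a radial distribution function can be computed from a single MD snapshot roughly as follows. This is a minimal sketch assuming a cubic periodic box and an N×3 NumPy coordinate array; the function name and binning choices are ours, not LAMMPS output.

```python
import numpy as np

def radial_distribution(positions, box_length, dr=0.1, r_max=None):
    """Compute g(r) for one snapshot with cubic periodic boundaries."""
    n = len(positions)
    if r_max is None:
        r_max = box_length / 2.0
    bins = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        # minimum-image displacements from atom i to all later atoms
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=bins)[0]
    rho = n / box_length**3                          # number density
    shell = (4.0 / 3.0) * np.pi * (bins[1:]**3 - bins[:-1]**3)
    # normalize pair counts by the ideal-gas expectation
    g = 2.0 * hist / (n * rho * shell)
    r_mid = 0.5 * (bins[1:] + bins[:-1])
    return r_mid, g
```

For an ideal (uniformly random) configuration, g(r) fluctuates around 1; peaks above 1 mark preferred interatomic distances such as nearest-neighbor shells.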

Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys

Procedia PDF Downloads 245
915 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers

Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri

Abstract:

Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to manage radiation protection effectively. To achieve this goal, periodic health examinations have become mandatory. As a result, working populations with a common occupational radiation history are screened on the basis of hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. The study details the distribution of, and trends of change in, blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the cumulative doses received from occupational radiation exposure. The study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006-2010. Descriptive and analytical statistics were used for data analysis, with P < 0.05 considered statistically significant. The results show no significant difference between the radiation workers and controls in WBC or platelet counts over the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs. RBC counts were analyzed separately by gender, because the normal reference range for RBCs is lower in women than in men, and again no statistically significant difference was observed.
Moreover, a separate evaluation of WBC count against the personnel's working experience and annual exposure dose showed no linear correlation among the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (no more than 7.58 mSv in this study) was too small to stimulate any quantifiable change in medical radiation workers' blood counts. Thus, the use of a more accurate screening method, based on the working profile of the radiation workers and their accumulated dose, is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alterations should be taken into account.
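The worker-versus-control comparisons above rest on two-sample tests at P < 0.05. The abstract does not name the exact test variant, so as an illustration only, a Welch-type (unequal-variance) t statistic can be sketched as:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances
    se2 = va / na + vb / nb                           # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2**2 / ((va / na)**2 / (na - 1) + (vb / nb)**2 / (nb - 1))
    return t, df
```

A |t| below the critical value for the computed df corresponds to P > 0.05, i.e. no significant difference between the worker and control samples.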

Keywords: blood cell count, mandatory testing, occupational exposure, radiation

Procedia PDF Downloads 462
914 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather

Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour

Abstract:

The MCP-iBAT¹ project was set up to study the behavior of Phase Change Materials (PCM) integrated into building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most commercially available PCMs are better suited to temperate climates than to tropical ones. The case of Reunion Island is noteworthy, as it hosts multiple micro-climates. This leads to our key question: developing one or several bio-based PCMs that cover the thermal needs of the different locations on the island. The present paper focuses on the numerical approach used to select PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software packages: EnergyPlus™ and Isolab. The latter has been developed in the laboratory, using the implicit Finite Difference Method, in order to evaluate different physical models. Both are Thermal Dynamic Simulation (TDS) tools that predict a building's thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction's characteristics (dimensions and materials) and the description of the environment (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B a layer of commercial PCM (Thermo Confort from MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to record temperatures, heat flows and humidity rates. The collected data are used for comparison with the numerical results. Our strategy is to operate two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) in order to measure different temperature ranges.
Therefore, we are able to collect data for various seasons within a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus™ and Isolab against experimental measurements, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months of measurements (September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of the PCM on inner surface temperatures is most visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements, with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer and the inter-seasons from past annual weather data, it is possible to build a numerical model of multi-layered PCM. Hence, the combined properties of the materials will provide an optimal scenario for the application of PCM in tropical areas. Future work will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings)
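The implicit Finite Difference Method mentioned for Isolab can be illustrated with a single backward-Euler step of one-dimensional transient conduction through a wall. This is a simplified sketch with fixed-temperature boundaries; the actual Isolab model, including the PCM enthalpy treatment and realistic boundary conditions, is considerably more elaborate.

```python
import numpy as np

def implicit_step(T, alpha, dx, dt, T_left, T_right):
    """One backward-Euler step of 1-D heat conduction, Dirichlet ends.

    T      : nodal temperatures at the current time (1-D array)
    alpha  : thermal diffusivity of the wall layer [m^2/s]
    dx, dt : grid spacing [m] and time step [s]
    """
    n = len(T)
    r = alpha * dt / dx**2
    # Assemble the tridiagonal system (implicit schemes are
    # unconditionally stable, so r may exceed the explicit limit of 0.5).
    A = np.zeros((n, n))
    b = T.copy()
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = T_left, T_right
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, b)
```

Stepped to steady state with unequal end temperatures, the profile relaxes to the expected straight line across the wall.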

Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area

Procedia PDF Downloads 95
913 Diagnostic Value of CT Scan in Acute Appendicitis

Authors: Maria Medeiros, Suren Surenthiran, Abitha Muralithar, Soushma Seeburuth, Mohammed Mohammed

Abstract:

Introduction: Appendicitis is the most common surgical emergency globally and can have devastating consequences. Diagnostic imaging has become increasingly common in aiding the diagnosis of acute appendicitis. Computerized tomography (CT) and ultrasound (US) are the most commonly used imaging modalities for this purpose. Pre-operative imaging has contributed to a reduction of negative appendicectomy rates from 10-29% to 5%. The literature reports that CT has a diagnostic sensitivity of 94% in acute appendicitis. This clinical audit was conducted to establish whether the diagnostic yield of CT for acute appendicitis matches the literature. CT has high sensitivity and specificity for diagnosing acute appendicitis, and its use can result in a lower negative appendicectomy rate. The aim of this study is to compare pre-operative CT findings with post-operative histopathology results and so establish the accuracy of CT in aiding the diagnosis of acute appendicitis. Methods: This was a retrospective study of adult presentations to the general surgery department of a district general hospital in central London with an impression of acute appendicitis. We analyzed all patients from July 2022 to December 2022 who underwent a CT scan preceding appendicectomy. Pre-operative CT findings and post-operative histopathology findings were compared to establish the efficacy of CT in diagnosing acute appendicitis. Our results were also cross-referenced with the pre-existing literature. Data were collected and anonymized using CERNER and analyzed in Microsoft Excel. Exclusion criteria: children (age <16). Results: 65 patients had CT scans whose reports stated acute appendicitis. Of those 65 patients, 62 underwent diagnostic laparoscopies.
100% of patients who underwent an appendicectomy with a pre-operative CT scan showing acute appendicitis had acute appendicitis on histopathological analysis. 3 of the 65 patients whose CT showed appendicitis received conservative treatment. Conclusion: CT scans positive for acute appendicitis had 100% sensitivity and positive predictive value in this cohort, in keeping with the high sensitivity (94%) reported in published studies. The use of CT in the diagnostic work-up of acute appendicitis can be extremely helpful in a) confirming the diagnosis and b) reducing the rate of negative appendicectomies, consequently reducing unnecessary operative risks for patients, costs, and pressure on emergency theatre lists.
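The headline figures follow directly from the counts above: 62 operated CT-positive patients, all confirmed on histopathology, give 62 true positives and 0 false positives. A trivial sketch (the false-negative count is set to zero here as an assumption, since only CT-positive cases entered the audit):

```python
def ppv_and_sensitivity(true_pos, false_pos, false_neg):
    """Positive predictive value and sensitivity from confusion-matrix counts."""
    ppv = true_pos / (true_pos + false_pos)    # of positive scans, how many were right
    sens = true_pos / (true_pos + false_neg)   # of true cases, how many were caught
    return ppv, sens

# Audit counts: 62 CT-positive appendicectomies, all histologically confirmed.
ppv, sens = ppv_and_sensitivity(true_pos=62, false_pos=0, false_neg=0)
```

Estimating sensitivity properly would require the CT-negative cases that later proved to be appendicitis, which lie outside this audit's inclusion criteria.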

Keywords: acute appendicitis, CT scan, general surgery, imaging

Procedia PDF Downloads 94
912 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid (CSF) flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control under real-time imaging. Upon reaching the pre-defined anatomical target, the external driver directs the specific microsurgical action required to achieve the surgical goal. Notable features of the proposed protocol are: i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory that may not be feasible via 'traditional' rigid endoscopy; ii) the treatment is microsurgical, and no foreign materials are left behind post-procedure; iii) the Bionaut™ is an untethered device navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories determined by the safest anatomical and physiological path; iv) the overall protocol involves minimally invasive delivery and post-operative retrieval of the surgical Bionaut™. The approach is expected to be suitable for treating pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 172
911 Assessing Local Authorities’ Interest in Addressing Urban Challenges through Nature Based Solutions in Romania

Authors: Athanasios A. Gavrilidis, Mihai R. Nita, Larissa N. Stoia, Diana A. Onose

Abstract:

Contemporary global environmental challenges must be addressed primarily at the local level. Cities are under continuous pressure, as they must ensure a high quality of life for their citizens while also adapting to and addressing specific environmental issues. Innovative solutions using natural features or mimicking natural systems are endorsed by the scientific community as efficient approaches both for mitigating the effects of climate change and declining environmental quality and for maintaining high standards of living for urban dwellers. The aim of this study was to assess whether Romanian city authorities are considering nature-based innovations as solutions for their planning, management and environmental issues. Data were gathered by applying 140 questionnaires to urban authorities throughout the country. The questionnaire was designed to assess local policy makers' perspective on the efficiency of nature-based innovations as a tool for addressing specific challenges. It also focused on extracting data about financing sources and the obstacles that must be overcome to adopt nature-based approaches. The results gathered from the municipalities participating in our study were statistically processed, and they revealed that Romanian city managers acknowledge the benefits of nature-based innovations, but investments in this sector are not among their top priorities. More than 90% of the selected cities agreed that in the last 10 years their major concern was expanding the grey infrastructure (roads and public amenities) using traditional approaches. When asked how they would react if faced with different socio-economic and environmental challenges, local urban managers indicated investment in nature-based solutions as a priority only in the cases of biodiversity loss and extreme weather, while for the other 14 proposed scenarios they would embrace the business-as-usual approach.
Our study indicates that while new concepts of sustainable urban planning emerge within the scientific community, local authorities need more time to understand and implement them. Without the proper knowledge, personnel, policies, or dedicated budgets, local administrators will not embrace nature-based innovations as solutions for their challenges.

Keywords: nature based innovations, perception analysis, policy making, urban planning

Procedia PDF Downloads 175
910 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest

Authors: Radu Deju, Carmen Mustata

Abstract:

The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the spent nuclear fuel of S-36 and EK-10 type was returned to the Russian Federation (the first shipment in 2009 and the last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, radionuclides with half-lives shorter than 1 year make only a minor contribution to the contamination of materials and equipment used in the reactor department. The decommissioning of the reactor started in 2010 and is planned to be finalized in 2020, making it the first nuclear research reactor in South-East Europe to begin a decommissioning project. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection and economic elements. This paper presents the application of the integrated management system to the decommissioning of systems, structures, equipment and components (SSEC) from the pumps room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, ion-exchange filters, piping connections, the drainage system and radioactive leaks. All decommissioning activities for the primary circuit were performed in stage 2 (year 2014), and they were developed and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses.
The presentation emphasizes how the provisions of the integrated management system are applied in the dismantling of the primary cooling system: the elaboration, approval and application of the necessary documentation, and record keeping before, during and after the dismantling activities. Radiation protection and economics are the key factors in selecting the proper technology. Dedicated and advanced technologies were chosen to perform specific tasks. Safety aspects have been taken into consideration. Resource constraints have also been an important issue in defining the decommissioning strategy. Important aspects such as radiological monitoring of personnel and areas, decontamination, waste management and final characterization of the released site are demonstrated and documented.

Keywords: decommissioning, integrated management system, nuclear reactor, waste management

Procedia PDF Downloads 291
909 CeO₂-Decorated Graphene-coated Nickel Foam with NiCo Layered Double Hydroxide for Efficient Hydrogen Evolution Reaction

Authors: Renzhi Qi, Zhaoping Zhong

Abstract:

Under the dual pressure of the global energy crisis and environmental pollution, avoiding the consumption of non-renewable fossil fuels based on carbon as the energy carrier, and developing and utilizing non-carbon energy carriers, are basic requirements for the future new energy economy. Electrocatalysts for water splitting play an important role in building sustainable and environmentally friendly energy conversion. The oxygen evolution reaction (OER) is fundamentally limited by the slow kinetics of multi-step proton-electron transfer, which constrains the efficiency and cost of water splitting. In this work, CeO₂@NiCo-NRGO/NF hybrid materials were prepared using nickel foam (NF) and nitrogen-doped reduced graphene oxide (NRGO) as conductive substrates by a multi-step hydrothermal method and were used as highly efficient catalysts for the OER. The well-connected nanosheet array forms a three-dimensional (3D) network on the substrate, providing a large electrochemical surface area with abundant catalytically active sites. The doping of CeO₂ into the NiCo-NRGO/NF electrocatalyst promotes the dispersion of the active species and acts synergistically to activate the reactants, which is crucial for improving its OER performance. The results indicate that CeO₂@NiCo-NRGO/NF requires an overpotential of only 250 mV to drive a current density of 10 mA cm⁻² for the OER in 1 M KOH, and exhibits excellent stability at this current density for more than 10 hours. The double-layer capacitance (Cdl) values show that CeO₂@NiCo-NRGO/NF significantly improves the interfacial conductivity and electrochemically active surface area. The hybrid structure promotes OER catalytic performance through a low onset potential, high electrical activity, and excellent long-term durability. This strategy for improving the catalytic activity of NiCo-LDH can be used to develop a variety of other electrocatalysts for water splitting.

Keywords: CeO₂, reduced graphene oxide, NiCo-layered double hydroxide, oxygen evolution reaction

Procedia PDF Downloads 83
908 Application of Mesenchymal Stem Cells in Diabetic Therapy

Authors: K. J. Keerthi, Vasundhara Kamineni, A. Ravi Shanker, T. Rammurthy, A. Vijaya Lakshmi, Q. Hasan

Abstract:

Pancreatic β-cells are the predominant insulin-producing cell type within the Islets of Langerhans, and insulin is the primary hormone that regulates carbohydrate and fat metabolism. Apoptosis of β-cells or insufficient insulin production leads to Diabetes Mellitus (DM). Current therapy for diabetes comprises either medical management or insulin replacement with regular monitoring. Replacement of β-cells is an attractive treatment option for both Type-1 and Type-2 DM, in view of recent work indicating that β-cell apoptosis is the common underlying cause of both types of DM. With the development of the Edmonton protocol, pancreatic β-cell allo-transplantation became possible, but it is still not considered the standard of care due to the subsequent requirement for lifelong immunosuppression and the scarcity of suitable healthy organs from which to retrieve pancreatic β-cells. Fetal pancreatic cells from abortuses were developed as a possible therapeutic option for diabetes; however, this posed several ethical issues. Hence, in the present study, mesenchymal stem cells (MSCs) isolated from human umbilical cord (HUC) tissue were differentiated into insulin-producing cells. MSCs have already made their mark in the growing field of regenerative medicine, and their therapeutic worth has been validated for a number of conditions. HUC samples were collected with prior informed consent, as approved by the institutional ethics committee. HUC samples (n=26) were processed using a combination of mechanical and enzymatic (collagenase-II, 100 U/ml, Gibco) methods to obtain MSCs, which were cultured in vitro in L-DMEM (low-glucose Dulbecco's Modified Eagle's Medium, Sigma, 4.5 mM glucose/L) with 10% FBS in a 5% CO2 incubator at 37°C. After reaching 80-90% confluency, the MSCs were characterized by flow cytometry and immunocytochemistry for specific cell surface antigens. The cells expressed CD90+, CD73+, CD105+, CD34-, CD45-, HLA-DR-/low and vimentin+.
These cells were differentiated into β-cells using H-DMEM (high-glucose Dulbecco's Modified Eagle's Medium, 25 mM glucose/L, Gibco), β-mercaptoethanol (0.1 mM, Hi-Media), basic fibroblast growth factor (10 µg/L, Gibco), and nicotinamide (10 mmol/L, Hi-Media). Pancreatic β-cells were confirmed by positive dithizone staining and were found to be functionally active, releasing 8 IU/ml insulin on glucose stimulation. Isolating MSCs from usually discarded, abundantly available HUC tissue, then expanding and differentiating them into β-cells, may be the most feasible cell therapy option for the millions of people suffering from DM globally.

Keywords: diabetes mellitus, human umbilical cord, mesenchymal stem cells, differentiation

Procedia PDF Downloads 260
907 Nanofiltration Membranes with Deposited Polyelectrolytes: Characterisation and Antifouling Potential

Authors: Viktor Kochkodan

Abstract:

The main problem arising in water treatment and desalination using pressure-driven membrane processes such as microfiltration, ultrafiltration, nanofiltration and reverse osmosis is membrane fouling, which seriously hampers the application of membrane technologies. One of the main approaches to mitigating membrane fouling is to minimize adhesion interactions between a foulant and a membrane, and surface coating of membranes with polyelectrolytes seems to be a simple and flexible technique for improving membrane fouling resistance. In this study, the composite polyamide membranes NF-90, NF-270 and BW-30 were modified by electrostatic deposition of polyelectrolyte multilayers made from various polycationic and polyanionic polymers of different molecular weights. Anionic polyelectrolytes such as poly(sodium 4-styrene sulfonate), poly(vinyl sulfonic acid, sodium salt), poly(4-styrene sulfonic acid-co-maleic acid) sodium salt and poly(acrylic acid) sodium salt (PA), and cationic polyelectrolytes such as poly(diallyldimethylammonium chloride), poly(ethylenimine) and poly(hexamethylene biguanide), were used for membrane modification. The effects of deposition time and the number of polyelectrolyte layers on membrane modification were evaluated. It was found that the degree of membrane modification depends on the chemical nature and molecular weight of the polyelectrolytes used. The surface morphology of the prepared composite membranes was studied using atomic force microscopy. It was shown that membrane surface roughness decreases significantly as the number of polyelectrolyte layers on the membrane surface increases. This smoothing of the membrane surface might contribute to reduced membrane fouling, as lower roughness is most often associated with a decrease in surface fouling.
Zeta potentials and water contact angles on the membrane surface before and after modification were also evaluated to provide additional information regarding membrane fouling. It was shown that the surface charge of the polyelectrolyte-modified membranes could be switched between positive and negative by coating with a cationic or an anionic polyelectrolyte. The water contact angle, in turn, was strongly affected when the outermost polyelectrolyte layer was changed. Finally, a distinct difference in performance between the uncoated membranes and the polyelectrolyte-modified membranes was found during treatment of seawater in the non-continuous regime. A possible mechanism for the higher fouling resistance of the modified membranes is discussed.

Keywords: contact angle, membrane fouling, polyelectrolytes, surface modification

Procedia PDF Downloads 251
906 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods

Authors: Mohammad Arabi

Abstract:

The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. 
This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
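The temporal and frequency features used above are not enumerated in the abstract, so the following is a representative sketch of commonly used choices (RMS, peak, crest factor, dominant spectral frequency) extracted from a vibration record; the feature names and selection are ours:

```python
import numpy as np

def vibration_features(signal, fs):
    """Temporal and frequency features of one vibration record sampled at fs Hz."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal**2))        # overall vibration energy
    peak = np.max(np.abs(signal))
    crest = peak / rms                       # spikiness; rises with local bearing defects
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # strongest component, skipping DC
    return {"rms": rms, "peak": peak, "crest": crest, "dominant_hz": dominant}
```

Feature vectors like this, computed per record for normal and faulty motors, are what a classifier such as an SVM or a neural network is then trained on.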

Keywords: electric motor, fault detection, frequency features, temporal features

Procedia PDF Downloads 52
905 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling

Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak

Abstract:

Nucleate boiling is characterized by the nucleation, growth, and departure of tiny individual vapor bubbles that originate in cavities or imperfections in the heating surface. It finds a wide range of applications, e.g., in heat exchangers and steam generators, core cooling in power reactors and rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety-limit heat flux (the critical heat flux; fluxes above it can damage the heating surface) for any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of bubble dynamics at the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data are scattered over various sets of conditions and are often in disagreement with each other. The correlations obtained from such data are also limited to the range of conditions they were established for, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models and arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data over a wide range of conditions, first for input and later for validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water have been carried out at atmospheric pressure to study bubble dynamics for nucleate pool boiling: growth rates, departure sizes, and departure frequencies. 
A number of heating elements have been used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry, along with experimental conditions such as the degree of subcooling, superheat, and heat flux. An attempt has been made to compare the data obtained with existing data and correlations in the literature to generate an exhaustive database for pool boiling conditions.
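For orientation, the departure sizes and frequencies discussed above are often estimated with classical correlations of exactly the limited-range kind the study aims to improve upon. The sketch below evaluates the Fritz correlation for departure diameter and a Zuber-type relation for departure frequency, for saturated water at atmospheric pressure; the contact angle and fluid-property values are assumed for illustration.

```python
import math

def fritz_departure_diameter(theta_deg, sigma, rho_l, rho_v, g=9.81):
    """Fritz correlation: bubble departure diameter (m) from contact angle
    (degrees), surface tension (N/m), and liquid/vapor densities (kg/m^3)."""
    return 0.0208 * theta_deg * math.sqrt(sigma / (g * (rho_l - rho_v)))

def zuber_departure_frequency(d_dep, sigma, rho_l, rho_v, g=9.81):
    """Zuber-type relation f*D = 0.59 * [sigma*g*(rho_l - rho_v)/rho_l^2]^0.25."""
    return 0.59 * (sigma * g * (rho_l - rho_v) / rho_l ** 2) ** 0.25 / d_dep

# assumed properties: saturated water at atmospheric pressure, 45 deg contact angle
sigma, rho_l, rho_v, theta = 0.0589, 958.4, 0.597, 45.0
d = fritz_departure_diameter(theta, sigma, rho_l, rho_v)
f = zuber_departure_frequency(d, sigma, rho_l, rho_v)
print(f"departure diameter ~ {d * 1e3:.2f} mm, frequency ~ {f:.0f} Hz")
```

Both quantities land in the millimetre and tens-of-hertz ranges typically reported for water pool boiling, which is the kind of sanity check a new experimental database enables.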

Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling

Procedia PDF Downloads 302
904 Adult Learners’ Code-Switching in the EFL Classroom: An Analysis of Frequency and Type of Code-Switching

Authors: Elizabeth Patricia Beck

Abstract:

Stepping into various English as a foreign language classrooms, one will see some fundamental similarities. There will likely be groups of students working collaboratively, possibly sitting at tables together. They will be using a set coursebook or photocopies of materials developed by publishers or the teacher. The teacher will be carefully monitoring students’ behaviour and progress. The teacher will also likely be insisting that the students speak only English together, possibly having implemented a complex penalty and reward system to encourage this. This is communicative language teaching, and it is commonly how foreign languages are taught around the world. Recently, there has been much interest in the code-switching behaviour of learners in foreign or second language classrooms. It is a significant topic as it relates to second language acquisition theory, language teacher training and policy, and student expectations and classroom practice. Generally, in an English as a foreign language context, an ‘English only’ policy is the norm. This is based on historical factors, socio-political influence, and theories of language learning. The trend, however, is shifting, and, based on these same factors, a re-examination of language use in the foreign language classroom is taking place. This paper reports the findings of an examination of the code-switching behaviour of learners with a shared native language in an English classroom. Specifically, it addresses the question of classroom code-switching by adult learners in the EFL classroom during student-to-student spoken interaction. Three generic categories of code-switching are proposed based on published research and classroom practice. Italian adult learners at three levels were observed, and patterns of language use were identified, recorded, and analysed using the proposed categories. 
After observations were completed, a questionnaire was distributed to the students focussing on attitudes and opinions around language choice in the EFL classroom, specifically the usefulness of L1 for particular functions in the classroom. The paper then investigates the relationship between learners’ foreign language proficiency and the frequency and type of code-switching that they engaged in, and the relationship between learners’ attitudes to classroom code-switching and their behaviour. Results show that code-switching patterns changed as the students’ English language proficiency improved, and that students’ attitudes towards code-switching generally correlated with their behaviour, with some exceptions. Finally, the discussion focusses on the details of the language produced in observation, possible influencing factors that may affect the frequency and type of code-switching that took place, and additional factors that may affect students’ attitudes towards code-switching in the foreign language classroom. An evaluation of the limitations of this study is offered, and some suggestions are made for future research in this field.

Keywords: code-switching, EFL, second language acquisition, adult learners

Procedia PDF Downloads 277
903 Investigating the Aerosol Load of Eastern Mediterranean Basin with Sentinel-5p Satellite

Authors: Deniz Yurtoğlu

Abstract:

Aerosols directly affect the radiative balance of the Earth by absorbing and/or scattering the sun's rays reaching the atmosphere, and indirectly affect the balance by acting as nuclei in cloud formation. The composition and the physical and chemical properties of aerosols vary depending on their sources and the time spent in the atmosphere. The Eastern Mediterranean Basin carries a high aerosol load formed from different sources, such as anthropogenic activities, desert dust outbreaks, and sea salt spray, and the area is subject to atmospheric transport from other locations on Earth. This region, which includes the deserts of Africa, the Middle East, and the Mediterranean Sea, is one of the areas most affected by climate change due to its location and atmospheric chemistry. This study aims to investigate the spatiotemporal variation of the aerosol load in the Eastern Mediterranean Basin between 2018 and 2022 with the help of a new pioneering satellite of ESA (European Space Agency), Sentinel-5P. TROPOMI (the TROPOspheric Monitoring Instrument), flying on this low-Earth-orbiting satellite, is a UV (ultraviolet)-sensing spectrometer with a resolution of 5.5 km x 3.5 km that can make measurements even in a cloud-covered atmosphere. Using the Absorbing Aerosol Index data produced by this spectrometer and scripts written in Python that transform the data into images, it was seen that the majority of the aerosol load in the Eastern Mediterranean Basin originates from desert dust and anthropogenic activities. After the daily data were retrieved and cleaned of NaN values, seasonal analyses matched the expected aerosol variations: high in warm seasons and lower in cold seasons. Monthly analyses showed that over four years there was an increase in the Absorbing Aerosol Index in spring and winter by 92.27% (2019-2021) and 39.81% (2019-2022), respectively. 
On the other hand, in the summer and autumn seasons, a decrease was observed, by 20.99% (2018-2021) and 0.94% (2018-2021), respectively. The overall variation of the mean absorbing aerosol index from TROPOMI between April 2018 and April 2022 reflects a 115.87% decrease in the annual mean, from 0.228 to -0.036. However, when the data are restricted to years with full January-to-December coverage, i.e., 2019 to 2021, there was an increase of 57.82% (from 0.108 to 0.171). This result can be interpreted as the effect of climate change on the aerosol load and also, more specifically, the effect of the forest fires that occurred in the summer months of 2021.

Keywords: aerosols, eastern mediterranean basin, sentinel-5p, tropomi, aerosol index, remote sensing

Procedia PDF Downloads 68
902 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. Unpowered, it nominally lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. 
(ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
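The discrete-element representation described for the PyBullet comparison can be sketched as a chain of segments whose bend angles are set by an assumed voltage-to-curvature coefficient. This is only a geometric sketch: the gravity treatment here is a crude clamp to the ground plane, not the paper's self-consistent lift-off solution, and all parameter values (coefficient, segment length, voltages) are hypothetical.

```python
import math

def sheet_shape(curvatures, seg_len):
    """Discrete-segment shape: each segment turns by kappa*seg_len, mimicking
    the rigid-arm-plus-'motor' model used for the PyBullet comparison."""
    x, y, theta = [0.0], [0.0], 0.0
    for kappa in curvatures:
        theta += kappa * seg_len
        x.append(x[-1] + seg_len * math.cos(theta))
        y.append(y[-1] + seg_len * math.sin(theta))
    return x, y

def clamp_to_ground(y):
    """Crude stand-in for the paper's self-consistent gravity treatment:
    no point of the sheet may pass below the ground plane y = 0."""
    return [max(v, 0.0) for v in y]

# hypothetical 5-actuator sheet, 10 segments per actuator
volts = [100, 0, 100, 0, 100]   # V, illustrative drive pattern
kappa_per_volt = 2e-2           # 1/(m*V), assumed curvature coefficient
seg_len = 2e-3                  # 2 mm segments
curv = [v * kappa_per_volt for v in volts for _ in range(10)]
xs, ys = sheet_shape(curv, seg_len)
ys = clamp_to_ground(ys)        # no-op here; matters for downward-bending shapes
print(f"tip lift = {ys[-1] * 1e3:.2f} mm over {len(curv)} segments")
```

Determining which segments actually lie flat (and with what normal force) is precisely the harder, self-consistent problem the paper's analytical model solves.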

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 181
901 Effect of Water Addition on Catalytic Activity for CO2 Purification from Oxyfuel Combustion

Authors: Joudia Akil, Stephane Siffert, Laurence Pirault-Roy, Renaud Cousin, Christophe Poupin

Abstract:

Oxyfuel combustion is a promising method that enables a CO2-rich stream to be obtained, with water vapor (~10%) and unburned components such as CO and NO, which must be removed before the CO2 can be used. Our objective is the final treatment of CO and NO by catalysis. Three-way catalysts are well-developed materials for the simultaneous conversion of NO, CO, and hydrocarbons. Pt and/or Rh ensure a quasi-complete removal of NOx, CO, and HC, and there is also growing interest in partly replacing Pt with less expensive Pd. The use of alumina and ceria as supports ensures, respectively, the stabilization of such species in the active state and the discharging or storing of oxygen to control the oxidation of CO and HC and the reduction of NOx. In this work, we compare different metals (Pd, Rh, and Pt) supported on Al2O3 and CeO2 for CO2 purification from oxyfuel combustion. The catalyst must reduce NO by CO in an oxidizing environment, in the presence of a CO2-rich stream, and be resistant to water. In this study, Al2O3 and CeO2 were used as support materials. Catalysts of 1 wt.% M/support (M = Pd, Rh, or Pt) were obtained by wet impregnation of the supports with a precursor of palladium [Pd(acac)2], rhodium [Rh(NO3)3], or platinum [Pt(NO2)2(NO3)2]. The materials were characterized by BET surface area, H2 chemisorption, and TEM. Catalytic activity was evaluated in CO2 purification carried out in a fixed-bed flow reactor containing 150 mg of catalyst at atmospheric pressure. The reactant gas flow is composed of 20% CO2, 10% O2, 0.5% CO, 0.02% NO, and 8.2% H2O (He as eluent gas) with a total flow of 200 mL.min−1, at the same GHSV (2.24x10⁴ h⁻¹). The catalytic performances of the samples were investigated with and without water. The results show that total oxidation of CO occurred over the different materials. 
This study evidenced an important effect of the nature of the metals and supports and of the presence or absence of H2O during the reduction of NO by CO under oxyfuel combustion conditions. Rh-based catalysts show that the addition of water has a very positive influence, especially for Rh on CeO2. Pt-based catalysts keep a good activity despite the addition of water on both supports studied. For NO reduction, the addition of water acts as a poison for Pd catalysts. The interesting results of Rh-based catalysts with water can be explained by the production of hydrogen through the water-gas shift reaction. The produced hydrogen acts as a more effective reductant than CO for NO removal. Furthermore, in TWCs, Rh is the main component responsible for NOx reduction due to its especially high activity for NO dissociation. Moreover, cerium oxide is a promoter of the WGSR.
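As a small numerical check of the stated conditions, the gas hourly space velocity (GHSV) relates the volumetric feed rate to the catalyst bed volume. The sketch below back-calculates the bed volume implied by the quoted 200 mL/min total flow and GHSV of 2.24x10⁴ h⁻¹; the reactor geometry itself is not given in the abstract, so this is an inference, not reported data.

```python
def ghsv(flow_ml_min, bed_volume_ml):
    """Gas hourly space velocity in h^-1: volumetric feed per bed volume."""
    return flow_ml_min * 60.0 / bed_volume_ml

# abstract's feed: 20% CO2, 10% O2, 0.5% CO, 0.02% NO, 8.2% H2O, balance He
flow = 200.0                               # mL/min total flow
bed_volume = flow * 60.0 / 2.24e4          # mL, back-calculated from GHSV
print(f"implied bed volume ~ {bed_volume:.2f} mL, "
      f"GHSV = {ghsv(flow, bed_volume):.3g} h^-1")
```

A bed of roughly half a millilitre is consistent with the quoted 150 mg catalyst loading in a lab-scale fixed-bed reactor.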

Keywords: carbon dioxide, environmental chemistry, heterogeneous catalysis

Procedia PDF Downloads 182
900 Designing a Socio-Technical System for Groundwater Resources Management, Applying Smart Energy and Water Meter

Authors: S. Mahdi Sadatmansouri, Maryam Khalili

Abstract:

The world nowadays encounters a serious water scarcity problem. During the past few years, with the advent of the Smart Energy and Water Meter (SEWM) and its installation at the electro-pumps of water wells, it was believed that it could be the golden key to addressing the over-pumping of groundwater resources. In fact, the implementation of these smart meters managed to control the water table drawdown for a short time, but it was not a sustainable approach. The SEWM was first treated as a law-enforcement facility; however, solving a complex socioeconomic problem like shared groundwater resources management requires more than just enforcement: participation to conserve common resources. The well owners or farmers, as water consumers, are the main and direct stakeholders of this system; other stakeholders may be government sectors, investors, technology providers, private sectors, or ordinary people. Designing a socio-technical system not only defines the role of each stakeholder but can also facilitate the communication needed to reach the system's goals while the benefits of each are considered and provided. Farmers, as the key participants in solving the groundwater problem, do not trust governments, but they would trust a fair system in which responsibilities, privileges, and benefits are clear. Technology can help this system remain impartial and productive. Social aspects provide rules, regulations, social objects, etc. for the system and help it to be more human-centered. As the design methodology, Design Thinking provides probable solutions for challenging problems and ongoing conflicts; it can illuminate the way in which the final system could be designed. Using the Human-Centered Design approach of IDEO helps to keep farmers at the center of the solution and provides a vision by which stakeholders' requirements and needs are addressed effectively. 
Farmers are expected to trust the system and participate in managing their groundwater resources if they find its rules and tools fair and effective. Moreover, implementation of the socio-technical system could change farmers' behavior so that they care more about their valuable shared water resources as well as their farm profit. This socio-technical system contains nine main subsystems: 1) Measurement and Monitoring system, 2) Legislation and Governmental system, 3) Information Sharing system, 4) Knowledge-based NGOs, 5) Integrated Farm Management system (using IoT), 6) Water Market and Water Banking system, 7) Gamification, 8) Agribusiness ecosystem, 9) Investment system.

Keywords: human centered design, participatory management, smart energy and water meter (SEWM), social object, socio-technical system, water table drawdown

Procedia PDF Downloads 294
899 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar-absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to the material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth-angle sweep. 
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as against the measured data. It was also observed that the accuracy of the RCS data for the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
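The flat-plate measurements above can be sanity-checked against the textbook broadside result sigma = 4*pi*A^2/lambda^2, which strictly holds for a perfectly conducting plate; for lossy dielectric plates the reflection coefficient reduces this value, so the sketch below gives only an upper-bound baseline. The plate size is assumed, not taken from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def plate_rcs_broadside(area_m2, freq_hz):
    """Physical-optics peak (broadside) RCS of a flat PEC plate:
    sigma = 4*pi*A^2 / lambda^2 (upper bound for a lossy dielectric plate)."""
    lam = C / freq_hz
    return 4 * math.pi * area_m2 ** 2 / lam ** 2

def to_dbsm(sigma_m2):
    """Convert RCS in m^2 to decibels relative to one square metre."""
    return 10 * math.log10(sigma_m2)

# hypothetical 15 cm x 15 cm plate over the measured 2-18 GHz band
area = 0.15 * 0.15
for f_ghz in (2, 10, 18):
    sigma = plate_rcs_broadside(area, f_ghz * 1e9)
    print(f"{f_ghz:2d} GHz: {to_dbsm(sigma):6.1f} dBsm")
```

The f-squared growth of the broadside peak is one reason RCS accuracy can be strongly frequency-dependent, as the study observed.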

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 243
898 Assessing Social Sustainability for Biofuels Supply Chains: The Case of Jet Biofuel in Brazil

Authors: Z. Wang, F. Pashaei Kamali, J. A. Posada Duque, P. Osseweijer

Abstract:

Globally, the aviation sector is seeking sustainable solutions to comply with the pressure to reduce greenhouse gas emissions. Jet fuels derived from biomass are generally perceived as a sustainable alternative to their fossil counterparts. However, the establishment of jet biofuel supply chains will have impacts on the environment, the economy, and society. While existing studies have predominantly evaluated the environmental impacts and techno-economic feasibility of jet biofuels, very few have taken the social and socioeconomic aspects into consideration. Therefore, this study aims to provide a focused evaluation of social sustainability for aviation biofuels from a supply chain perspective. Three potential jet biofuel supply chains based on different feedstocks, i.e., sugarcane, eucalyptus, and macauba, were analyzed in the context of Brazil. The assessment of social sustainability is performed with a process-based approach combined with input-output analysis. Across the supply chains, a set of social sustainability issues including employment, working conditions (occupational accidents and wage level), labour rights, education, equity, social development (GDP and trade balance), and food security was evaluated in a (semi-)quantitative manner. The selection of these social issues is based on two criteria: (1) the issues are highly relevant and important to jet biofuel production; (2) methodologies are available for assessing them. The results show that the three jet biofuel supply chains lead to differentiated levels of social effects. The sugarcane-based supply chain creates the highest number of jobs, whereas the biggest contributor to GDP turns out to be the macauba-based supply chain. In comparison, the eucalyptus-based supply chain stands out regarding working conditions.
It is also worth noting that a jet biofuel supply chain with a high level of social benefits could also entail a high level of social concerns (such as occupational accidents, labour rights violations, and trade imbalance). Further research is suggested to investigate the possible interactions between different social issues. In addition, the exploration of a wider range of social effects is needed to expand the comprehension of social sustainability for biofuel supply chains.
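The process-based plus input-output approach can be illustrated with a minimal Leontief sketch: total sector outputs solve x = (I - A)^-1 * d for a technology matrix A and final demand d, and socioeconomic indicators such as employment follow by applying coefficient vectors to x. The two-sector matrix and employment coefficients below are invented for illustration and are not the study's Brazilian data.

```python
def leontief_output(A, demand):
    """Solve x = (I - A)^-1 * d for a 2-sector economy via the
    closed-form 2x2 matrix inverse."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    x0 = (d * demand[0] - b * demand[1]) / det
    x1 = (-c * demand[0] + a * demand[1]) / det
    return x0, x1

# hypothetical 2-sector table: feedstock agriculture and fuel processing;
# A[i][j] = input of sector i needed per unit output of sector j
A = [[0.10, 0.20],
     [0.05, 0.15]]
jobs_per_output = (8.0, 2.0)          # assumed employment coefficients
x = leontief_output(A, (100.0, 50.0))  # final demand per sector
total_jobs = sum(e * xi for e, xi in zip(jobs_per_output, x))
print(round(total_jobs, 1))
```

The same mechanism, with coefficient vectors for wages, GDP contribution, or trade balance, yields the kind of chain-level indicators compared across the three feedstocks.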

Keywords: biobased supply chain, jet biofuel, social assessment, social sustainability, socio-economic impacts

Procedia PDF Downloads 266
897 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid

Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov

Abstract:

Modern city electrical grids are forced to increase their density due to the growing number of customers and the requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current-withstand qualities of disconnectors, cables, and transformers. The superconducting fault current limiter (SFCL) is a modern solution designed to deal with increasing fault current levels in power grids. The key feature of this device is its instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats that contain the superconductor parts and are filled with liquid nitrogen (one per phase), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic-system power outage). 
All these faults were eliminated without shutting down the SFCL, thanks to the specially designed cryogenic-system backups and the quick responses of the grid operator and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic-system maintenance and the measures taken to solve and prevent similar faults in the future.

Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics

Procedia PDF Downloads 83
896 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery

Authors: Bencherif Kada

Abstract:

In forest management practice, landscape and Mediterranean forest are never posed as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbor, super-object, etc.), and that there is a close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than the per-pixel one. It simplifies the content of the image while preserving spectrally and spatially homogeneous types of land cover, such as Aleppo pine stands; cork oak groves; mixed groves of cork oak, holm oak, and zen oak; mixed groves of holm oak and thuja; water bodies; dense and open shrublands of oaks; vegetable crops or orchards; herbaceous plants; and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness appear effective for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information. 
Stands that are continuously dominant and dense over a large area were grouped into a single class, as were dense but fragmented stands interspersed with clearings. Low shrubland formations and high wooded shrublands are well individualized, although the former show some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
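The per-object k-nearest-neighbors step described above can be sketched minimally as follows, using invented image-object features (a mean reflectance and a length-to-width shape ratio) rather than the actual ETM+ segment statistics.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: (feature_vector, label) pairs for labelled image objects;
    the query object takes the majority label among its k nearest neighbors."""
    dists = sorted((math.dist(feat, query), label) for feat, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy image objects: (mean NIR reflectance, length-to-width ratio), illustrative
train = [
    ((0.45, 1.2), "aleppo_pine"), ((0.48, 1.1), "aleppo_pine"),
    ((0.30, 1.4), "cork_oak"),    ((0.28, 1.5), "cork_oak"),
    ((0.10, 3.0), "bare_soil"),   ((0.12, 2.8), "bare_soil"),
]
print(knn_classify(train, (0.46, 1.15)))  # prints "aleppo_pine"
```

In practice each object would carry many spectral, texture, shape, and topological attributes, and features on different scales would be normalized before computing distances.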

Keywords: forest, oaks, remote sensing, diversity, shrublands

Procedia PDF Downloads 128
895 Currently Use Pesticides: Fate, Availability, and Effects in Soils

Authors: Lucie Bielská, Lucia Škulcová, Martina Hvězdová, Jakub Hofman, Zdeněk Šimek

Abstract:

The currently used pesticides represent a broad group of chemicals with various physicochemical and environmental properties, whose input has reached 2×10⁶ tons/year and is expected to increase further. Of that amount, only 1% directly interacts with the target organism, while the rest represents a potential risk to the environment and human health. Despite being authorized and approved for field applications, the effects of pesticides in the environment can differ from model scenarios due to various pesticide-soil interactions and the resulting modified fate and behavior. As such, direct monitoring of pesticide residues and evaluation of their impact on soil biota, the aquatic environment, food contamination, and human health should be performed to prevent environmental and economic damage. The present project focuses on fluvisols, as they are intensively used in agriculture but face several environmental stressors. Fluvisols develop in the vicinity of rivers through the periodic settling of alluvial sediments and periodic interruptions to pedogenesis by flooding. As a result, fluvisols exhibit very high yields per unit area and are intensively used and loaded with pesticides. Because of the floods, their regular contact with surface water gives rise to serious concerns about surface water contamination. In order to monitor pesticide residues and assess their environmental and biological impact within this project, 70 fluvisols were sampled across the Czech Republic and analyzed for the total and bioaccessible amounts of 40 pesticides. For that purpose, methodologies for pesticide extraction and analysis by liquid chromatography-mass spectrometry were developed and optimized. To assess the biological risks, both earthworm bioaccumulation tests and various passive sampling techniques (XAD resin, Chemcatcher, and silicone rubber) were optimized and applied. 
These data on chemical analysis and bioavailability were combined with the results of soil analysis, including the measurement of basic physicochemical soil properties as well as a detailed characterization of soil organic matter with the advanced method of diffuse reflectance infrared spectrometry. The results provide unique data on the residual levels of pesticides in the Czech Republic and on the factors responsible for increased pesticide residue levels, which should be included in the modeling of pesticide fate and effects.

Keywords: currently used pesticides, fluvisols, bioavailability, QuEChERS, liquid chromatography-mass spectrometry, soil properties, DRIFT analysis

Procedia PDF Downloads 465
894 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle: their chemically cross-linked polymer structure means they are neither fusible nor soluble and consequently cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases, and oil. Oils derived from waste tires share properties with commercial diesel fuel. The problem with the light oil derived from pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), which causes harmful sulfur oxide (SOx) emissions when the oil is combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for removing sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for efficient removal of complex and non-complex sulfur species from TPO. The study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio, and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effects of temperature, pressure, and time will be determined for the vacuum distillation of both raw TPO and the acid-treated oil for comparison.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones in a photocatalyzed system using TiO2 as the catalyst and hydrogen peroxide as the oxidizing agent; acetonitrile will then be used as the extraction solvent. Adsorptive desulphurization will be used to adsorb traces of sulfurous compounds that remain after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 273
893 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed Diesel unit, characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and of the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations determined the swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, as well as for a crank angle (CA) of 280°, for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure, and streamline contours were generated in the important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while increasing the mass flow rate. To calculate the swirl ratio [-], the tangential velocity v [m/s] of the charge was determined first, and the angular velocity ω [rad/s] was then computed as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
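The swirl-ratio calculation outlined above can be sketched in a few lines of Python. The element values, the mass-weighted averaging choice, and the 3900 rpm operating point below are illustrative assumptions, not data from the study:

```python
import math

def swirl_ratio(v_t, r, m, engine_rpm):
    """Swirl ratio: mean angular velocity of the charge over crankshaft angular velocity."""
    # Per-element angular velocity omega = v_t / r [rad/s]
    omega = [v / radius for v, radius in zip(v_t, r)]
    # Mean over the mesh elements, weighted here by element mass (an assumption)
    omega_charge = sum(w * o for w, o in zip(m, omega)) / sum(m)
    # Crankshaft angular velocity [rad/s]
    omega_engine = engine_rpm * 2.0 * math.pi / 60.0
    return omega_charge / omega_engine

# Hypothetical element data: tangential velocities [m/s], radii from the cylinder
# axis [m], element masses [kg]; 3900 rpm is the mid-point of the engine's
# 3800-4000 rpm operating range.
print(swirl_ratio([12.0, 18.0, 24.0], [0.01, 0.02, 0.03],
                  [1e-6, 2e-6, 1.5e-6], engine_rpm=3900))
```

In a CFD post-processing workflow, the per-element velocities, radii, and masses would be exported from the solver rather than typed in by hand.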
Acknowledgement: This work was realized in cooperation with the Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 199
892 Strategies for Improving and Sustaining Quality in Higher Education

Authors: Anshu Radha Aggarwal

Abstract:

Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressure to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been a huge expansion in higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that about another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students. Many studies have highlighted the issues ailing our curricula, delivery, monitoring, and assessment. The Govt. of India (via MHRD, UGC, NBA, …) has initiated several steps to improve the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive Govt. grants. Moreover, Outcome-based Education and Training (OBET) has been mandated and encouraged in teaching/learning institutions. MHRD, UGC, and NBA have made accreditation of schools, colleges, and universities mandatory w.e.f. Jan 2014. The OBET approach is learner-centric, whereas the traditional approach has been teacher-centric.
OBET is a process that involves the re-orientation/restructuring of the curriculum, implementation, assessment/measurement of educational goals, and achievement of higher-order learning, rather than merely clearing/passing the university examinations. OBET aims to bring about these desired changes within students by increasing knowledge, developing skills, influencing attitudes, and creating a socially connected mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain the outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.

Keywords: learning outcomes, curriculum development, pedagogy, outcome based education

Procedia PDF Downloads 526
891 Technology Road Mapping in the Fourth Industrial Revolution: A Comprehensive Analysis and Strategic Framework

Authors: Abdul Rahman Hamdan

Abstract:

The Fourth Industrial Revolution (4IR) has brought unprecedented technological advancements that have disrupted many industries worldwide. To keep up with the rapid disruption caused by these advancements, technology road mapping has emerged as a critical tool for organizations. Technology road mapping can guide companies to become more adaptable, to anticipate future transformation and innovation, and to avoid becoming redundant or irrelevant amid rapid technological change. This research paper provides a comprehensive analysis of technology road mapping within the context of the 4IR. The objectives of the paper are to provide companies with practical insights and a strategic framework of technology road mapping with which to navigate the fast-changing nature of the 4IR. The study also contributes to the understanding and practice of technology road mapping in the 4IR and, at the same time, provides organizations with the tools and critical insight needed to navigate the 4IR transformation. Based on a literature review and case studies, the study analyses key principles, methodologies, and best practices in technology road mapping and integrates them with the unique characteristics and challenges of the 4IR. The paper gives the background of the Fourth Industrial Revolution, explores the disruptive potential of 4IR technologies, and argues the critical need for technology road mapping, consisting of strategic planning and foresight, to remain competitive and relevant in the 4IR era. It also highlights the importance of technology road mapping as a proactive approach by which an organisation aligns its objectives and resources with its technology and product development in the face of the fast-evolving 4IR landscape.
The paper also covers the theoretical foundations of technology road mapping, examines various methodological approaches, and identifies the participants in the process, such as external experts, stakeholders, collaborative platforms, and cross-functional teams, to ensure an integrated and robust technology roadmap for the organisation. Moreover, this study presents a comprehensive framework for technology road mapping in the 4IR, incorporating key elements and processes such as technology assessment, competitive intelligence, risk analysis, and resource allocation. It provides a framework for implementing technology road mapping from strategic planning, goal setting, and technology scanning to roadmap visualisation, implementation planning, monitoring, and evaluation. In addition, the study addresses the challenges and limitations of technology road mapping in the 4IR, including gap analysis. In conclusion, the study proposes a set of practical recommendations for organizations that intend to leverage technology road mapping as a strategic tool in the 4IR for driving innovation and remaining competitive in current and future ecosystems.

Keywords: technology management, technology road mapping, technology transfer, technology planning

Procedia PDF Downloads 70
890 The Revival of Asakusa Entertainment Streets and Social Conflicts Since Its Inceptive Point, the Post-war Time

Authors: Seung Oh, Satoshi Okada

Abstract:

Today, religious organizations that have long existed alongside local people are being challenged by social changes in the districts they oversee. The influence of religious organizations is declining everywhere, as locals seeking diversity and economic benefit become more interested in development projects that attract investors and increase market value than in conservation. Religions whose moral and philosophical stance rejects materialism have a limited capacity to act as agents of local development in modern society. In Tokyo, however, the city's oldest temple, Senso-Ji, played a vital role in the local development of Asakusa as an entertainment district while nevertheless retaining the area's traditional character, despite the almost complete destruction caused by the Tokyo air raids. The temple acted vigorously as a mediator between the community and the Tokyo Metropolitan Government and as a spokesman for common interests. This research therefore examines the social conflicts that Senso-Ji has confronted with regard to the pressures of development in Asakusa on the one hand, and the legitimacy of perpetuating its traditional religious and cultural role in local society on the other. First, this article examines Senso-Ji's place in society based on its position in the history of Japanese Buddhism, which existed to offer spiritual and practical help to ordinary people, and investigates its social legitimacy as a local stakeholder and historical institution. Second, the paper considers the impact of the social changes that Asakusa underwent during the Meiji and Taisho eras by examining the social conflicts and changes in the Asakusa entertainment district, taking the Tokyo air raids as the inceptive point (IP). Third, it reconsiders how Senso-Ji has responded to today's growth-oriented local developments, as proposed by Tokyo's metropolitan planning authorities along lines commonly seen in all cities.
Studying the role of Senso-Ji in the development of Asakusa can serve as a case study to justify the involvement of religious institutions in local issues and as a useful and practical example of progressive development which nevertheless permitted conservation of traditional features, as a result of pressure from social groups in a way that may be useful for other places facing similar problems.

Keywords: architecture, urban design, urban planning, preservation, conservation, social science

Procedia PDF Downloads 27
889 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)

Authors: Bencherif Kada

Abstract:

In forest management practice, landscape and Mediterranean forest are rarely treated as linked objects. Yet sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology is based on ground data and on the basic processing units of object-oriented classification, namely segments (so-called image objects), which represent relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbor, super-object, etc.), and that it exploits the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than the per-pixel approach. It simplifies the content of the image while preserving spectrally and spatially homogeneous land cover types such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open shrublands of oaks, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while spatial attributes such as shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information.
Dense stands that dominate continuously over a large area were merged into a single class, as were dense and fragmented stands mixed with clear stands. Low shrubland formations and high wooded shrublands are well individualized, although the former shows some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
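The per-object k-nearest neighbors classification described above can be sketched in a few lines of Python. The segment feature values, class labels, and choice of three features below are purely illustrative, not derived from the Tlemcen data:

```python
from collections import Counter
import math

def knn_classify(segments, labels, query, k=3):
    """Label an image object by majority vote among its k nearest labeled segments."""
    # Sort labeled segments by Euclidean distance in feature space
    dists = sorted((math.dist(feat, query), lab) for feat, lab in zip(segments, labels))
    # Majority vote over the k closest training segments
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical per-segment features: (mean band reflectance, length-to-width ratio, compactness)
segments = [
    (0.12, 1.2, 0.90),  # Aleppo pine stand
    (0.11, 1.1, 0.85),  # Aleppo pine stand
    (0.18, 1.5, 0.80),  # cork oak grove
    (0.35, 3.0, 0.40),  # bare soil
    (0.33, 2.8, 0.45),  # bare soil
]
labels = ["pine", "pine", "cork_oak", "bare_soil", "bare_soil"]

print(knn_classify(segments, labels, (0.34, 2.9, 0.42)))  # → bare_soil
```

In practice, the feature vector would hold the spectral, shape, and topological attributes computed per segment, and k would be tuned against ground-truth samples.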

Keywords: forest, oaks, remote sensing, biodiversity, shrublands

Procedia PDF Downloads 33
888 Performance Evaluation of On-Site Sewage Treatment System (Johkasou)

Authors: Aashutosh Garg, Ankur Rajpal, A. A. Kazmi

Abstract:

The efficiency of an on-site wastewater treatment system named Johkasou was evaluated based on its pollutant removal efficiency over 10 months. The system was installed at IIT Roorkee and had a capacity of treating 7 m3/d of sewage, sufficient for a group of 30-50 people. It was fed with actual wastewater through an equalization tank to eliminate fluctuations throughout the day. Methanol and ammonium chloride were added to this equalization tank to increase the Chemical Oxygen Demand (COD) and ammonia content of the influent. The outlet from the Johkasou is sent to a tertiary unit consisting of a pressure sand filter and an activated carbon filter for further treatment. Samples were collected on alternate days from Monday to Friday, and the following parameters were evaluated: COD, Biochemical Oxygen Demand (BOD), Total Suspended Solids (TSS), and Total Nitrogen (TN). The average removal efficiencies for COD, BOD, TSS, and TN were 89.6%, 97.7%, 96%, and 80%, respectively. The cost of treating the wastewater comes out to Rs 23/m3, which includes electricity, cleaning and maintenance, chemical, and desludging costs. Tests for coliforms were also performed, and the removal efficiency for total and fecal coliforms was 100%. The sludge generation rate is approximately 20% of the BOD removed, and the sludge needs to be removed twice a year. The system also showed a very good response to hydraulic shock loads. A vacation stress analysis was performed to evaluate the performance of the system when there is no influent for 8 consecutive days; the results showed that the system needs a recovery time of about 48 hours to stabilize.
After about 2 days, the system returns to its original condition, and all the effluent parameters fall within the limits of the National Green Tribunal (NGT) standards. Another stress analysis was performed to save electricity, in which the main aeration blower was turned off for 2 to 12 hours a day; the results showed that the blower can be turned off for about 4-6 hours a day, reducing electricity costs by about 25%. It was concluded that the Johkasou system can reduce all the physicochemical parameters tested to within the prescribed limits of the Indian Standard.
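The removal efficiencies reported above are simple influent-to-effluent reductions. A minimal sketch follows; the concentrations shown are hypothetical values chosen only to reproduce the reported COD figure, not measurements from the study:

```python
def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant across the treatment train."""
    return 100.0 * (influent - effluent) / influent

# Hypothetical influent/effluent COD concentrations [mg/L]
print(removal_efficiency(influent=500.0, effluent=52.0))  # → 89.6
```

The same formula applies per parameter (BOD, TSS, TN), and averaging it over the sampling campaign gives the reported mean efficiencies.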

Keywords: on-site treatment, domestic wastewater, Johkasou, nutrient removal, pathogens removal

Procedia PDF Downloads 115
887 The Effect of Corporate Governance to Islamic Banking Performance Using Maqasid Index Approach in Indonesia

Authors: Audia Syafa'atur Rahman, Rozali Haron

Abstract:

The practices of Islamic banking are more attuned to the goal of profit maximization than to obtaining ethical profit. Ethical profit is obtained from interest-free earnings that benefit the growth of society and the economy. Good corporate governance practices are needed to assure the sustainability of Islamic banks in achieving Maqasid Shariah, whose main purpose is boosting the well-being of people. The Maqasid Shariah performance measurement is used to assess the duties and responsibilities expected of Islamic banks. It covers not only a single dimension such as financial measurement but many dimensions that reflect the main purpose of Islamic banks. The implementation of good corporate governance is essential because it protects the interests of the stakeholders and facilitates effective monitoring, encouraging Islamic banks to utilize resources more efficiently in order to achieve the Maqasid Shariah. This study aims to provide empirical evidence on the Maqasid performance of Islamic banks using a Maqasid performance evaluation model, and to examine the influence of Shariah Supervisory Board (SSB) characteristics and board structures on Islamic banks' performance as measured by that model. By employing the simple additive weighting method, the Maqasid index for all the Islamic banks in Indonesia from 2012 to 2016 ranged from just above 11% to 28%. Maqasid Shariah performance indices above 20% were obtained by banks such as Bank Muamalat Indonesia, Bank Panin Syariah, and Bank BRI Syariah, with BMI consistently achieving above 23%. Other Islamic banks, such as Bank Victoria Syariah, Bank Jabar Banten Syariah, Bank BNI Syariah, Bank Mega Syariah, BCA Syariah, and Maybank Syariah Indonesia, showed fluctuating Maqasid performance index values from year to year.
The impact of SSB characteristics and board structures was tested using random-effects generalized least squares. The findings indicate that SSB characteristics (SSB size, SSB cross-membership, SSB education, and SSB reputation) and board structures (board size and board independence) play an essential role in improving the performance of Islamic banks. Specifically, the findings denote that a smaller Shariah Supervisory Board, a higher proportion of SSB cross-membership, fewer SSB members holding doctorate degrees, fewer highly reputable scholars, more members on the board of directors, and fewer independent non-executive directors enhance the performance of Islamic banks.
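The simple additive weighting method used to compute the Maqasid index reduces to a weighted sum of per-element performance ratios. A minimal Python sketch follows; the ratios and weights are purely hypothetical placeholders, since the actual Maqasid elements and their weights come from the evaluation model, not from this example:

```python
def maqasid_index(ratios, weights):
    """Simple additive weighting: weighted sum of per-element performance ratios."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * r for w, r in zip(weights, ratios))

# Hypothetical performance ratios for one bank-year (e.g. education grants / total
# expenses, research expenses / total expenses, ...) with illustrative weights.
ratios  = [0.10, 0.25, 0.30, 0.20]
weights = [0.30, 0.30, 0.20, 0.20]
print(f"{maqasid_index(ratios, weights):.2%}")  # → 20.50%
```

Computed per bank and per year, such index values allow the cross-bank comparison reported above (roughly 11% to 28% across Indonesian Islamic banks).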

Keywords: Maqasid Shariah, corporate governance, Islamic banks, Shariah supervisory board

Procedia PDF Downloads 242