Search results for: efficient use of plasma
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5841

3621 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral, renewable, and sustainable fuel with a huge global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gas (GHG) emissions and reduce dependency on fossil fuels. Biomass energy use also has other beneficial effects, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content: high moisture content and high volatile matter yields make biomass a low-calorific fuel, which is a significant disadvantage relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These drawbacks can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated up to 300 °C under non-oxidizing conditions to avoid burning the material. The treated biomass, called biochar, has considerably lower contents of moisture, volatile matter, and oxygen than the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to levels comparable with those of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface of the biochar becomes hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size below 250 µm.
Then, the samples were torrefied in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300 °C at a heating rate of 10 °C/min. The biochars obtained from this process were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300 °C. In contrast, torrefaction at 250 °C was found optimal for Rhododendron and ash tree, since torrefaction at 300 °C had a detrimental effect on their HHV. An increase in carbon content and a reduction in oxygen content were also determined. The burning characteristics of the biochars were studied by thermal analysis using a TA Instruments SDT Q600 analyzer, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass, and that the resulting biochars have superior characteristics compared to the parent biomasses.
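The HHV gain can be quantified as an energy densification ratio, sketched below. The biochar HHV (6065 kcal/kg) is the value reported in the abstract, but the parent-biomass HHV is a hypothetical placeholder for illustration, not a figure from the study.

```python
# Energy densification from torrefaction: a minimal sketch.
KCAL_TO_MJ = 4.184e-3  # 1 kcal = 4.184 kJ

def kcal_per_kg_to_mj_per_kg(hhv_kcal):
    """Convert a higher heating value from kcal/kg to MJ/kg."""
    return hhv_kcal * KCAL_TO_MJ

def densification_ratio(hhv_biochar, hhv_parent):
    """HHV of the biochar relative to the raw biomass."""
    return hhv_biochar / hhv_parent

hhv_biochar_kcal = 6065   # reported for OMR torrefied at 300 C
hhv_parent_kcal = 4300    # hypothetical raw-OMR value, for illustration only

print(f"Biochar HHV: {kcal_per_kg_to_mj_per_kg(hhv_biochar_kcal):.2f} MJ/kg")
print(f"Densification ratio: {densification_ratio(hhv_biochar_kcal, hhv_parent_kcal):.2f}")
```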

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 369
3620 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models

Authors: Sam Khozama, Ali M. Mayya

Abstract:

Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect it in its early stages. Big cancer data needs very powerful tools for analysis and prediction, and machine learning and deep learning are two of the most efficient tools for predicting cancer from textual data. In this study, we developed a fusion of a machine learning model and a deep learning model: a Long Short-Term Memory (LSTM) network and ensemble learning with hyperparameter optimization are used, and their outputs are combined through score-level fusion to obtain the final prediction. Experiments are performed on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved performance by 3.3% compared to the individual models.
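Score-level fusion combines each model's output probability rather than its final label. A minimal sketch is below; the weighted average and the per-sample probabilities are illustrative, not the authors' exact fusion rule or data.

```python
# Score-level fusion of two classifiers: a weighted average of their
# per-sample probabilities, followed by a single decision threshold.

def fuse_scores(lstm_score, ensemble_score, w=0.5):
    """Weighted score-level fusion of two model probabilities."""
    return w * lstm_score + (1 - w) * ensemble_score

def predict(score, threshold=0.5):
    """Turn a fused probability into a binary prediction."""
    return 1 if score >= threshold else 0

# Hypothetical probabilities from the two models for one sample.
lstm_p, ens_p = 0.62, 0.41
fused = fuse_scores(lstm_p, ens_p, w=0.6)
print(f"fused score = {fused:.3f}, prediction = {predict(fused)}")
```

The fusion weight w is a hyperparameter; in practice it would be tuned on a validation split alongside the models' own hyperparameters.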

Keywords: machine learning, deep learning, cancer prediction, breast cancer, LSTM, fusion

Procedia PDF Downloads 155
3619 Reducing Energy Consumption and GHG Emission by Integration of Flare Gas with Fuel Gas Network in Refinery

Authors: N. Tahouni, M. Gholami, M. H. Panjeshahi

Abstract:

Gas flaring is one of the largest GHG-emitting sources in the oil and gas industries. It is also a major way of wasting energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach to reducing GHG emissions and conserving energy in flaring systems, and integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient tool for doing so. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers, and turbines. In this article, we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on pollution from gas flaring to reduce GHG emissions as far as possible. Results for a refinery case study showed that integrating the flare gas stream with waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
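The economics of routing flare gas into an FGN can be sketched with a toy single-sink balance: flare gas (treated here as free) displaces purchased natural gas, and every MWh recovered avoids the emissions of flaring it. This is a simplified illustration, not the Hasan et al. optimization model; all numbers are hypothetical.

```python
# Toy fuel-gas-network balance for one fuel sink.

def blend(demand_mw, flare_mw, ng_price_per_mwh, flare_co2_t_per_mwh):
    """Return natural-gas purchase (MW), hourly fuel cost, and avoided
    hourly flaring emissions (t CO2) when flare gas is routed to the sink."""
    flare_used = min(demand_mw, flare_mw)     # use all available flare gas
    ng_needed = demand_mw - flare_used        # natural gas covers the rest
    cost = ng_needed * ng_price_per_mwh
    avoided_co2 = flare_used * flare_co2_t_per_mwh
    return ng_needed, cost, avoided_co2

ng, cost, co2 = blend(demand_mw=50, flare_mw=18,
                      ng_price_per_mwh=30.0, flare_co2_t_per_mwh=0.2)
print(f"NG purchase: {ng} MW, cost: {cost}/h, avoided: {co2} t CO2/h")
```

A real FGN model adds quality constraints (heating value, Wobbe index, contaminants) and optimizes the mix across many sources and sinks simultaneously.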

Keywords: flaring, fuel gas network, GHG emissions, stream

Procedia PDF Downloads 333
3618 A Generic Metamodel for Dependability Analysis

Authors: Moomen Chaari, Wolfgang Ecker, Thomas Kruse, Bogdan-Andrei Tabacaru

Abstract:

In our daily life, we frequently interact with complex systems that facilitate our mobility, enhance our access to information, and sometimes help us recover from illnesses or diseases. The reliance on these systems is motivated by the established evaluation and assessment procedures performed during the different phases of the design and manufacturing flow. Such procedures aim to qualify the system’s delivered services with respect to their availability, reliability, safety, and other properties generally referred to as dependability attributes. In this paper, we propose a metamodel-based generic characterization of dependability concepts and describe an automation methodology to customize this characterization to different standards and contexts. When integrated into concrete design and verification environments, the proposed methodology promotes the reuse of already available dependability assessment tools and reduces the costs and effort required to create consistent and efficient artefacts for fault injection or error simulation.
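Generating artefacts from a model typically boils down to filling templates from model elements. The sketch below is a deliberately tiny stand-in for that idea; the fault types, the `signal_t` stub, and the injector shape are illustrative placeholders, not the authors' metamodel or generator.

```python
# Metamodel-to-artefact generation: a minimal template-based sketch.
from string import Template

FAULT_INJECTOR = Template(
    "void inject_${fault}(signal_t *s) {\n"
    "    /* corrupt '${target}' to emulate a ${fault} fault */\n"
    "    s->${target} ^= 0x1;\n"
    "}\n"
)

# A tiny "model": each entry names a fault type and the signal field it hits.
model = [
    {"fault": "bitflip", "target": "data"},
    {"fault": "stuck_at", "target": "valid"},
]

# One generated C stub per model element.
generated = "\n".join(FAULT_INJECTOR.substitute(m) for m in model)
print(generated)
```

The point of putting the characterization in a metamodel is that the same generation step can be retargeted: swapping the template swaps the output format (fault-injection code, error-simulation scripts, reports) without touching the model.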

Keywords: dependability analysis, model-driven development, metamodeling, code generation

Procedia PDF Downloads 484
3617 Performance, Need and Discriminatory Allegiance of Employees as Awarding Criteria of Distributive Justice

Authors: B. Gangloff, L. Mayoral, A. Rezrazi

Abstract:

Theorists of distributive justice usually propose three rules of salary distribution: equality, equity, and need. Their influence has been studied by taking into consideration (in terms of equity) the performance of employees and their degree of allegiance/rebellion toward discriminatory hierarchical orders, as well as the reasons for such allegiance/rebellion (allegiance out of conviction, legalism, or opportunism; ethical rebellion). Conducted in Argentina, the study confronted 480 students (240 male and 240 female) with a practical case in which they had to advise the manager of a real estate agency on the allocation of a bonus amongst his employees. The employees were characterized by their respective performance, one of them being further defined as being (or not) in financial need and as having complied (or not) with a discriminatory hierarchical order regarding foreigners. The results show that the distribution of the bonus follows only the rules of equity and need: employees who are more efficient, more allegiant, or in need are rewarded more than the others. It is also noteworthy that allegiant employees are rewarded in the same way regardless of the reason for their allegiance, and that the employee who refuses to adopt a discriminatory conduct is penalized.

Keywords: distributive justice, equity, performance, allegiance, ethics

Procedia PDF Downloads 293
3616 Detection of Adulterants in Milk Using IoT

Authors: Shaik Mohammad Samiullah Shariff, Siva Sreenath, Sai Haripriya, Prathyusha, M. Padma Lalitha

Abstract:

The Internet of Things (IoT) is an emerging technology that has extended the possibilities of smart dairy farming (SDF). Milk consumption is continually increasing due to the world's growing population. As a result, some providers are prone to using dishonest measures to close the supply-demand gap, such as adding adulterants to milk. Traditional testing methods for identifying adulterants in milk require particular chemicals and equipment. While effective, they have the disadvantage of yielding difficult, time-consuming, qualitative results, and the same milk sample cannot be tested for other adulterants later. This study therefore proposes an IoT-based approach that identifies adulterants in milk by measuring electrical conductivity (EC) or Total Dissolved Solids (TDS) together with pH. An Arduino UNO microcontroller is used to assess the contaminants. According to this study, when there is no adulteration, the pH and TDS values of milk range from 6.45 to 6.67 and from 750 to 780 ppm, respectively. Finally, the data is uploaded to the cloud via an IoT device attached to the Ubidots web platform.
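The decision logic reduces to checking readings against the pure-milk ranges given in the abstract (pH 6.45 to 6.67, TDS 750 to 780 ppm). The sketch below shows that check in plain Python; it is a stand-in for the Arduino firmware, not the authors' code.

```python
# Flag a milk sample whose pH or TDS falls outside the pure-milk ranges
# reported in the abstract.
PH_RANGE = (6.45, 6.67)
TDS_RANGE = (750, 780)   # ppm

def is_adulterated(ph, tds_ppm):
    """Return True when either reading is outside its pure-milk range."""
    ph_ok = PH_RANGE[0] <= ph <= PH_RANGE[1]
    tds_ok = TDS_RANGE[0] <= tds_ppm <= TDS_RANGE[1]
    return not (ph_ok and tds_ok)

print(is_adulterated(6.55, 765))   # both in range -> False (looks pure)
print(is_adulterated(7.10, 765))   # pH too high -> True (flagged)
```

On the device, the same comparison would run on each sensor sample before the reading is pushed to the cloud dashboard.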

Keywords: internet of things (IoT), pH sensor, TDS sensor, EC sensor, industry 4.0

Procedia PDF Downloads 75
3615 How to Perform Proper Indexing?

Authors: Watheq Mansour, Waleed Bin Owais, Mohammad Basheer Kotit, Khaled Khan

Abstract:

Efficient query processing is one of the utmost requisites in any business environment to satisfy consumer needs. This paper investigates the main types of indexing models, viz. primary, secondary, and multi-level, under the ambit of the various types of queries for which each indexing model performs with efficacy. This study also discusses the inherent advantages and disadvantages of each indexing model and how an indexing model can be chosen for a particular environment. The paper also draws parallels between the indexing models and provides recommendations that would help a database administrator zero in on a particular indexing model suited to the needs and requirements of the production environment. In addition, to satisfy the industry and consumer needs arising from today's colossal data generation, this study proposes two novel indexing techniques that can efficiently index highly unstructured and structured Big Data. The study also briefly discusses some best practices that the industry should follow in order to choose an indexing model apposite to its prerequisites and requirements.
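The trade-off between index types can be shown in miniature: a sorted primary index answers key and range lookups in O(log n) via binary search, while a hash index answers exact-key lookups in O(1) on average but cannot serve range queries. The records and keys below are illustrative only.

```python
# Two indexing models in miniature: sorted primary index vs. hash index.
import bisect

records = [(3, "Ada"), (7, "Boole"), (12, "Codd"), (25, "Dijkstra")]
keys = [k for k, _ in records]           # primary index: keys kept sorted

def primary_lookup(key):
    """O(log n) exact-key lookup through the sorted primary index."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return records[i][1]
    return None

hash_index = {k: v for k, v in records}  # hash index: O(1) average lookup

print(primary_lookup(12), hash_index.get(12))  # both find "Codd"
```

A multi-level index simply repeats the sorted-index idea: a small top-level index points into blocks of the full index, the way B-tree internal nodes point into leaves.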

Keywords: indexing, hashing, latent semantic indexing, B-tree

Procedia PDF Downloads 153
3614 Techno-Economic Assessments of Promising Chemicals from a Sugar Mill Based Biorefinery

Authors: Kathleen Frances Haigh, Mieke Nieder-Heitmann, Somayeh Farzad, Mohsen Ali Mandegari, Johann Ferdinand Gorgens

Abstract:

Lignocellulose can be converted to a range of biochemicals and biofuels, and where it is derived from agricultural waste, issues of competition with food are virtually eliminated. One such source of lignocellulose is the South African sugar industry, where it could be accessed through changes to current farming practices and investments in more efficient boilers. The South African sugar industry is struggling due to falling sugar prices and increasing costs, and it is proposed that annexing a biorefinery to a sugar mill will broaden the product range and improve viability. Process simulations of the selected chemicals were generated using Aspen Plus®. It was envisaged that a biorefinery would be annexed to a typical South African sugar mill: bagasse would be diverted from the existing boilers to the biorefinery and mixed with harvest residues, and this biomass would provide both the feedstock for the biorefinery and the process energy for the biorefinery and the sugar mill. Thus, in all scenarios, a portion of the biomass was diverted to a new, efficient combined heat and power (CHP) plant. The Aspen Plus® simulations provided the mass and energy balance data needed to carry out an economic assessment of each scenario: the net present value (NPV), internal rate of return (IRR), and minimum selling price (MSP) were calculated for each one. As a starting point, scenarios were generated for the production of ethanol, ethanol and lactic acid, ethanol and furfural, butanol, methanol, and Fischer-Tropsch syncrude. The bypass to the CHP plant is a useful indicator of the energy demands of the chemical processes; an iterative approach was used to identify a suitable bypass, because increasing this value both increases the amount of energy available and reduces the capacity of the chemical plant. Bypass values ranged from 30% for syncrude production to 50% for combined ethanol and furfural production. A hurdle rate of 15.7% was selected for the IRR.
The butanol, combined ethanol and furfural, and Fischer-Tropsch syncrude scenarios are unsuitable for investment, with IRRs of 4.8%, 7.5%, and 11.5%, respectively. This provides valuable insights into research opportunities: for example, furfural from sugarcane bagasse is an established process, although the integration of furfural production with ethanol is less well understood. The IRR for the ethanol scenario was 14.7%, which is below the investment criterion, but given its technological maturity it may still be considered for investment. The scenarios that met the investment criterion were the combined ethanol and lactic acid scenario and the methanol scenario, with IRRs of 20.5% and 16.7%, respectively. These assessments show that the production of biochemicals from lignocellulose can be commercially viable, and they have provided valuable insights for research aimed at improving the commercial viability of additional chemicals and scenarios. This has led to further assessments of the production of itaconic acid, succinic acid, citric acid, xylitol, polyhydroxybutyrate, polyethylene, glucaric acid, and glutamic acid.
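The NPV and IRR metrics used to rank the scenarios can be sketched in a few lines: NPV discounts each year's cash flow, and IRR is the rate at which NPV crosses zero (found here by bisection). The cash flows below are hypothetical, not figures from the study.

```python
# NPV and a bisection IRR: a minimal sketch of the investment metrics.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (investment) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-7):
    """Internal rate of return by bisection (assumes NPV changes sign
    between lo and hi, which holds for an investment-then-income profile)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100.0] + [20.0] * 10     # hypothetical: invest 100, earn 20/yr
rate = irr(flows)
print(f"IRR = {rate:.1%}; clears a 15.7% hurdle: {rate > 0.157}")
```

A scenario is accepted when its IRR exceeds the hurdle rate; the MSP is the converse calculation, solving for the product price that makes NPV zero at the hurdle rate.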

Keywords: biorefineries, sugar mill, methanol, ethanol

Procedia PDF Downloads 189
3613 Positive Effects of Natural Gas Usage on Air Pollution

Authors: Ismail Becenen

Abstract:

Air pollution, a consequence of the urbanization brought about by modern life, is as global as it is local and regional. Because of its adverse effects on human health, air quality is given importance all over the world; according to the World Health Organization, clean air is a basic requirement for human health and well-being, and polluted air poses a very high risk, especially for heart disease and stroke. In this study, the positive effects of natural gas usage on air pollution in cities are explained using literature surveys and air pollution measurement values. Natural gas is cleaner than other types of fuel: it contains less sulfur and fewer organic sulfur compounds, and when it burns it leaves no ash and creates no landfill problems. It is a clean fuel that burns easily and efficiently. In addition, it has no toxic effect on people in case of inhalation. As a result, the use of natural gas needs to become widespread to reduce air pollution around the world and provide a healthier life for people and the environment.

Keywords: natural gas, air pollution, sulfur dioxide, particulate matter, energy

Procedia PDF Downloads 190
3612 Biomimetic Building Envelopes to Reduce Energy Consumption in Hot and Dry Climates

Authors: Aswitha Bachala

Abstract:

Energy shortage has been a major worldwide problem since the 1970s due to high energy consumption. Buildings are the primary energy users, accounting for 40% of global energy consumption, and 40-50% of a building's energy usage is attributable to its envelope. In hot and dry climates, 40% of energy is consumed for cooling alone, which implies that a major portion of the potential energy savings can be achieved through the envelope. Biomimicry can be one solution, extracting efficient thermoregulation strategies found in nature. This paper aims to identify biomimetic building envelopes that offer a high potential to reduce energy consumption in hot and dry climates, focusing on the scope for reducing energy consumption through a biomimetic approach to envelopes. In-depth research on different biomimetic building envelopes is presented and analyzed in terms of heat absorption and the impact they have on reducing buildings' energy consumption. This helps to identify feasible biomimetic building envelopes for mitigating heat absorption in hot and dry climates.

Keywords: biomimicry, building envelopes, energy consumption, hot and dry climate

Procedia PDF Downloads 208
3611 Improvement on a CNC Gantry Machine Structure Design for Higher Machining Speed Capability

Authors: Ahmed A. D. Sarhan, S. R. Besharaty, Javad Akbaria, M. Hamdi

Abstract:

The capability of CNC gantry milling machines to manufacture long components has expanded the use of such machines. On the other hand, the rigidity of the gantry can degrade under severe loads or vibration during operation. Indeed, machining quality depends on the machine's dynamic behavior throughout the operating process. For this reason, machines of this type have usually been operated conservatively and inefficiently: they are typically employed for rough machining and may not produce adequate surface finishes. In this paper, a CNC gantry milling machine with the potential to produce a good surface finish is designed and analyzed. The lowest natural frequency of this machine is 202 Hz at all motion amplitudes, with a full range of suitable frequency responses. Meanwhile, the maximum deformation of the gantry under dead loads is 0.565 µm, indicating that this machine tool is capable of producing higher product quality.

Keywords: frequency response, finite element, gantry machine, gantry design, static and dynamic analysis

Procedia PDF Downloads 350
3610 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells

Authors: Brahim Aissa

Abstract:

Transparent conductive oxides (TCOs) used as front electrodes in solar cells must simultaneously feature high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, thereby motivating the search for high-performance TCOs. Additionally, due to the presence of temperature-sensitive layers in many solar cell designs (for example, in thin-film silicon and silicon heterojunction (SHJ) cells), low-temperature deposition processes are preferable. Several deposition techniques have been explored for fabricating high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering is the most established technique, despite the fact that it can damage underlying layers. Sn-doped In₂O₃ (ITO) is the most commonly used transparent electrode contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fractions in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on the one hand, and on a-Si (p- and n-type):H/intrinsic a-Si/glass substrates, on the other. Hall effect measurements were systematically conducted together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected, whereas the TT and TR were only slightly impacted by the oxygen variation. Furthermore, time-of-flight secondary ion mass spectrometry (TOF-SIMS) was used to determine the distribution of various species throughout the thickness of the ITO and at the various interfaces.
The depth profiles of indium, oxygen, tin, silicon, phosphorus, boron, and hydrogen were investigated throughout the various thicknesses and interfaces, and the results are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear-emitter SHJ devices, and their photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance, attributed to a lower series resistance.

Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties

Procedia PDF Downloads 151
3609 Sulforaphane Alleviates Muscular Dystrophy in Mdx Mice by Activation of Nrf2

Authors: Chengcao Sun, Cuili Yang, Shujun Li, Ruilin Xue, Liang Wang, Yongyong Xi, Dejia Li

Abstract:

Background: Sulforaphane (SFN), one of the most important isothiocyanates in the human diet, is known to have chemopreventive and antioxidant activities in different tissues via activation of NF-E2-related factor 2 (Nrf2)-mediated induction of antioxidant/phase II enzymes, such as heme oxygenase-1 (HO-1) and NAD(P)H quinone oxidoreductase 1 (NQO1). However, its effects on muscular dystrophy remain unknown. This work was undertaken to evaluate the effects of sulforaphane on Duchenne muscular dystrophy (DMD). Methods: 4-week-old mdx mice were treated with SFN by gavage (2 mg/kg body weight per day) for 8 weeks. Blood was collected from the eye socket every week, and tibialis anterior, extensor digitorum longus, gastrocnemius, soleus, and triceps brachii muscles and heart samples were collected after the 8-week gavage. Force measurements and exercise capacity assays were performed. GSH/GSSG ratio, TBARS, CK, and LDH levels were analyzed by spectrophotometric methods. H&E staining was used for histological and morphometric analysis of the skeletal muscles of mdx mice, and Evans blue dye staining was performed to assess sarcolemmal integrity. Furthermore, the role of sulforaphane in the Nrf2/ARE signaling pathway was analyzed by ELISA, western blot, and qRT-PCR. Results: Our results demonstrated that SFN treatment increased the expression and activity of the muscle phase II enzymes NQO1 and HO-1 in an Nrf2-dependent manner. SFN significantly increased the skeletal muscle mass, muscle force (~30%), running distance (~20%), and GSH/GSSG ratio (~3.2-fold) of mdx mice, and decreased the activities of plasma creatine phosphokinase (CK) (~45%) and lactate dehydrogenase (LDH) (~40%), gastrocnemius hypertrophy (~25%), myocardial hypertrophy (~20%), and MDA levels (~60%). Furthermore, SFN treatment also reduced central nucleation (~40%), fiber size variability, and inflammation, and improved the sarcolemmal integrity of mdx mice.
Conclusions: Collectively, these results show that SFN can improve muscle function and pathology and protect dystrophic muscle from oxidative damage in mdx mice through the Nrf2 signaling pathway, indicating that Nrf2 activation may have clinical implications for the treatment of patients with muscular dystrophy.

Keywords: sulforaphane, duchenne muscular dystrophy, Nrf2, oxidative stress

Procedia PDF Downloads 319
3608 Neural Nets Based Approach for 2-Cells Power Converter Control

Authors: Kamel Laidi, Khelifa Benmansour, Ouahid Bouchhida

Abstract:

A neural network-based approach for a 2-cell serial converter has been developed and implemented. The approach is based on a behavioural description of the different operating modes of the converter. Each operating mode represents a well-defined configuration, to which is matched an operating zone satisfying given invariance conditions that depend on the capacitor voltages and the load current of the converter. Each mode is associated with a control vector whose components are the control signals to be applied to the converter switches. The problem is therefore reduced to a classification task over the different operating modes of the converter. Artificial neural networks, which constitute a powerful tool for this kind of task, were adopted and implemented. The application to a 2-cell chopper ensured efficient and robust control of the load current and good balancing of the capacitor voltages.
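The mapping the network learns can be made concrete with a rule-based stand-in: measured state (capacitor voltage, load current) is classified into an operating mode, and each mode maps to a switch-control vector. The voltage threshold, reference values, and control vectors below are illustrative placeholders, not the paper's converter parameters.

```python
# Rule-based stand-in for the mode classifier of a 2-cell chopper.

E = 100.0          # hypothetical DC-bus voltage
V_REF = E / 2      # ideal flying-capacitor voltage for a 2-cell converter

def control_vector(v_cap, i_load, i_ref):
    """Pick switch states (s1, s2) that rebalance the capacitor while
    steering the load current toward its reference."""
    need_current = i_load < i_ref
    if not need_current:
        return (0, 0)              # freewheel: no source connected
    # States (0,1) and (1,0) both drive current but charge/discharge the
    # flying capacitor in opposite directions, so use them for balancing.
    return (0, 1) if v_cap > V_REF else (1, 0)

print(control_vector(v_cap=55.0, i_load=4.0, i_ref=5.0))  # discharge cap
print(control_vector(v_cap=45.0, i_load=4.0, i_ref=5.0))  # charge cap
print(control_vector(v_cap=50.0, i_load=6.0, i_ref=5.0))  # freewheel
```

A neural classifier trained on (v_cap, i_load) samples labelled with these modes replaces the explicit thresholds, which is what makes the approach robust to parameter variation.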

Keywords: neural nets, control, multicellular converters, 2-cells chopper

Procedia PDF Downloads 827
3607 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process of nature within an efficient timescale. Many of the methods for generating a population of candidate solutions to a design problem through an evolutionary stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be optimized simultaneously, independently, and objectively. This is an essential approach for design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the simulation's ability to maintain variation among the design solutions in the population while simultaneously increasing their fitness. This is commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has maintained levels of variation so high that an optimal set cannot be discerned, thus providing the user with a solution set that has not evolved efficiently toward the objectives of the problem at hand. The experiments presented in this paper therefore seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion in the development of an urban tissue whose primary architectural element is the superblock. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can help generate fitter individuals in a more efficient timeframe, compared to conventional simulations that incorporate only architectural and environmental parameters.
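The quantity in question is easy to state in code: score each candidate on an objective and take the population standard deviation of those scores as a measure of retained variation. The sketch below uses illustrative score lists, not data from the urban-tissue experiments.

```python
# Standard deviation as a generative fitness value: a minimal sketch.
import statistics

def variation(scores):
    """Population standard deviation of the candidates' objective scores."""
    return statistics.pstdev(scores)

converged = [10.1, 10.0, 9.9, 10.0, 10.0]   # clustered around the mean
diverse = [4.0, 12.5, 8.0, 15.0, 6.5]       # spread out

print(f"converged population: sigma = {variation(converged):.3f}")
print(f"diverse population:   sigma = {variation(diverse):.3f}")
```

Used generatively, this value becomes one more objective for the selection step: the simulation can reward generations whose sigma stays within a target band, penalizing both premature convergence (sigma near zero) and aimless spread (sigma too large).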

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 129
3606 Efficient Utilization of Commodity Computers in Academic Institutes: A Cloud Computing Approach

Authors: Jasraj Meena, Malay Kumar, Manu Vardhan

Abstract:

Cloud computing is a new technology in industry and academia. The technology has grown and matured over the last half decade and proven its significant role in the changing environment of IT infrastructure, where cloud services and resources are offered over the network. Cloud technology enables users to use services and resources without being concerned about the technical implications of the technology. Substantial research has been performed on the usage of cloud computing in educational institutes, and the majority of it provides cloud services over high-end blade servers or other high-end CPUs. However, this paper proposes a new stack called “CiCKAStack”, which provides cloud services over unutilized computing resources, referred to as commodity computers. “CiCKAStack” provides IaaS and PaaS using the underlying commodity computers. This not only increases the utilization of existing computing resources but also provides an organized file system, on-demand computing resources, and a design and development environment.

Keywords: commodity computers, cloud-computing, KVM, CloudStack, AppScale

Procedia PDF Downloads 267
3605 The Efficiency Analysis in the Health Sector: Marmara Region

Authors: Hale Kirer Silva Lecuna, Beyza Aydin

Abstract:

Health is one of the main components of human capital and sustainable development, and it is very important for economic growth. Health economics, an indisputable part of the science of economics, has five branches in general: health and development, financing of health services, economic regulation in health, allocation of resources, and efficiency of health services. A well-developed and efficient health sector plays a major role in raising the level of development of countries. The most crucial pillars of the health sector are hospitals, which are divided into public and private. The main purpose of hospitals is to provide efficient services, and the aim is therefore to meet patients' satisfaction by increasing service quality. Health-related studies in Turkey date back to the Ottoman and Seljuk Empires; more recently, Turkey implemented 'Health Sector Transformation Programs' under different titles between 2003 and 2010. Our aim in this paper is to measure how effective these transformation programs have been for the health sector, to see how much they have increased the efficiency of hospitals over the years, to assess the return on the investments, to make comments and suggestions on the results, and to provide a new reference for the literature. Within this framework, the public and private hospitals in Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, Istanbul, Kırklareli, Kocaeli, Sakarya, Tekirdağ, and Yalova are examined using Data Envelopment Analysis (DEA) for the years between 2000 and 2019. DEA is a linear programming-based technique that gives relatively good results in multivariate studies; it essentially estimates an efficiency frontier and makes comparisons against it. Constant returns to scale and variable returns to scale are the two most commonly used DEA models, and both come in input-oriented and output-oriented forms.
To analyze the data, the number of personnel, number of specialist physicians, number of practitioners, number of beds, and number of examinations will be used as input variables, and the number of surgeries, the in-patient ratio, and the crude mortality rate as output variables. Eleven hospitals belonging to the Marmara region were included in the study. These hospitals operated efficiently in only 7 provinces (Balıkesir, Bilecik, Bursa, Edirne, İstanbul, Kırklareli, Yalova) in 2001, when no transformation program had yet been implemented. After the transformation program was implemented, for example in 2014 and 2016, 10 hospitals (Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, İstanbul, Kocaeli, Kırklareli, Tekirdağ, Yalova) were found to be efficient. In 2015, inefficient results were observed for Sakarya, Tekirdağ, and Yalova. However, since these scores are closer to 1 after the transformation program, we can say that the program has had positive effects. For Sakarya alone, efficient results were not achieved in any year. Overall, the results show that the transformation program has a positive effect on the efficiency of hospitals.
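The DEA efficiency score can be illustrated in the simplest possible setting. The sketch below is a toy example with invented hospital data: in the single-input, single-output case, the constant-returns-to-scale (CCR) score reduces to each unit's output/input ratio normalized by the best observed ratio. The full multi-input, multi-output model used in the study requires solving one linear program per hospital.

```python
# Toy DEA efficiency in the single-input, single-output case: each unit's
# output/input ratio, normalized by the best ratio observed, gives its
# efficiency score; a score of 1.0 marks the frontier. The bed and
# surgery figures below are illustrative, not the study's data.

def dea_ccr_single(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# hypothetical hospitals: beds (input) vs. surgeries (output)
beds      = [100, 150, 80, 200]
surgeries = [400, 450, 320, 500]

scores = dea_ccr_single(beds, surgeries)
efficient = [i for i, s in enumerate(scores) if abs(s - 1.0) < 1e-9]
print([round(s, 3) for s in scores])  # [1.0, 0.75, 1.0, 0.625]
```

Hospitals 0 and 2 lie on the estimated frontier; the others receive scores below 1 in proportion to their distance from it.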

Keywords: data envelopment analysis, efficiency, health sector, Marmara region

Procedia PDF Downloads 125
3604 Multi-Objective Simulated Annealing Algorithms for Scheduling Just-In-Time Assembly Lines

Authors: Ghorbanali Mohammadi

Abstract:

New approaches to sequencing mixed-model manufacturing systems are presented. These approaches have attracted considerable attention due to their potential to deal with difficult optimization problems. This paper presents Multi-Objective Simulated Annealing Algorithm (MOSAA) approaches to the Just-In-Time (JIT) sequencing problem, where workload smoothing (WL) and the number of set-ups (St) are to be optimized simultaneously. Mixed-model assembly lines are production lines on which a variety of product models with similar characteristics are assembled; the resulting sequencing problem is NP-hard. Two annealing methods are proposed to solve the multi-objective problem and find an efficient frontier of all design configurations. The performance of the two methods is tested on several problems from the literature. Experimentation demonstrates the desirable relative performance of the presented methodology.
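As a rough illustration of the annealing approach (not the paper's actual MOSAA, which maintains an archive of non-dominated sequences), the sketch below scalarizes two JIT objectives, a Miltenburg-style workload-smoothing term and the number of adjacent set-ups, into one weighted cost and improves a sequence by random pairwise swaps. The model demands, times, weights, and cooling schedule are all assumptions.

```python
import math, random

# Single-run simulated annealing sketch for a toy JIT sequencing problem.
# The two objectives are combined into one weighted cost; a true MOSAA
# would keep a set of non-dominated sequences instead.

random.seed(42)
demand = {"A": 4, "B": 3, "C": 2}            # units of each model
D = sum(demand.values())

def smoothing(seq):
    """Deviation of actual from ideal cumulative production rates."""
    count = {m: 0 for m in demand}
    total = 0.0
    for k, m in enumerate(seq, start=1):
        count[m] += 1
        total += sum((count[x] - k * demand[x] / D) ** 2 for x in demand)
    return total

def cost(seq, w_load=1.0, w_setup=0.5):
    setups = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    return w_load * smoothing(seq) + w_setup * setups

cur = [m for m, d in demand.items() for _ in range(d)]   # AAAABBBCC
init_c = cost(cur)
cur_c, best, best_c = init_c, cur[:], init_c
T = 10.0
while T > 1e-3:
    i, j = random.sample(range(D), 2)
    cand = cur[:]
    cand[i], cand[j] = cand[j], cand[i]      # swap two positions
    c = cost(cand)
    if c < cur_c or random.random() < math.exp((cur_c - c) / T):
        cur, cur_c = cand, c                 # accept (possibly worse) move
        if c < best_c:
            best, best_c = cand[:], c
    T *= 0.95                                # geometric cooling
print(best, round(best_c, 2))
```

The accepted-worse-move rule is what lets annealing escape local optima; as T falls, the search becomes greedy.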

Keywords: scheduling, just-in-time, mixed-model assembly line, sequencing, simulated annealing

Procedia PDF Downloads 122
3603 Ultrastructural Characterization of Lipid Droplets of Rat Hepatocytes after Whole Body 60-Cobalt Gamma Radiation

Authors: Ivna Mororó, Lise P. Labéjof, Stephanie Ribeiro, Kely Almeida

Abstract:

Lipid droplets (LDs) are normally present in greater or lesser numbers in the cytoplasm of almost all eukaryotic and some prokaryotic cells. They are independent organelles composed of a lipid ester core and a surface phospholipid monolayer. As a lipid storage form, they provide an available source of energy for the cell. Recently it was demonstrated that they also play an important role in many other cellular processes. Among the many unresolved questions about them, it is not even known how LDs are formed, how lipids are recruited to LDs, or how LDs interact with other organelles. Excess fat in the organism is pathological and often associated with the development of genetic, hormonal, or behavioral diseases. The formation and accumulation of lipid droplets in the cytoplasm can be increased by exogenous physical or chemical agents. It is well known that ionizing radiation affects lipid metabolism, resulting in increased lipogenesis in cells, but the details of this process are unknown. To better understand the mode of formation of LDs in liver cells, we investigated their ultrastructural morphology after irradiation. For that, Wistar rats were exposed to whole-body gamma radiation from 60-cobalt at various single doses. Samples of the livers were processed for analysis under a conventional transmission electron microscope. We found that, compared to controls, morphological changes in liver cells were evident at the higher radiation doses used. A great number of lipid droplets of different sizes and homogeneous content was detected, and some of them had merged with each other. In some cells, diffuse LDs were observed that were not limited by a monolayer of phospholipids. This finding suggests that the phospholipid monolayer of the LDs was disrupted by ionizing radiation exposure, which promotes lipid peroxidation of endomembranes.
Thus the absence of the phospholipid monolayer may prevent some cellular activities, as follows: - lipid exocytosis, which requires the merging of the LD membrane with the plasma membrane; - the interaction of LDs with other membrane-bound organelles such as the endoplasmic reticulum (ER), the Golgi apparatus, and mitochondria; and - lipolysis of the lipid esters contained in the LDs, which requires enzymes located in membrane-bound organelles such as the ER. All these impediments can contribute to lipid accumulation in the cytoplasm and the development of diseases such as liver steatosis, cirrhosis, and cancer.

Keywords: radiobiology, hepatocytes, lipid metabolism, transmission electron microscopy

Procedia PDF Downloads 306
3602 Intelligent Electric Vehicle Charging System (IEVCS)

Authors: Prateek Saxena, Sanjeev Singh, Julius Roy

Abstract:

The security of the power distribution grid remains paramount to utility professionals as they enhance it and make it more efficient. One of the most serious threats to the system is transformer maintenance, as the load is ever increasing with the addition of elements like electric vehicles. In this paper, intelligent transformer monitoring and grid management are proposed. The system is engineered to use the evolving data from smart meters for grid analytics and diagnostics for preventive maintenance. A two-tier architecture for hardware and software integration forms a robust system for the smart grid. The proposal also presents interoperable meter standards for easy integration. Distribution transformer analytics based on real-time data benefits utilities by preventing outages, protecting against revenue loss, improving return on assets, and reducing overall maintenance cost through predictive monitoring.

Keywords: electric vehicle charging, transformer monitoring, data analytics, intelligent grid

Procedia PDF Downloads 781
3601 FESA: Fuzzy-Controlled Energy-Efficient Selective Allocation and Reallocation of Tasks Among Mobile Robots

Authors: Anuradha Banerjee

Abstract:

Energy-aware operation is one of the visionary goals in the area of robotics because the operability of robots depends greatly on their residual energy. In practice, the tasks allocated to robots carry different priorities, and often an upper time limit is imposed within which the task needs to be completed. If a robot is unable to complete a particular task given to it, the task is reallocated to some other robot. The collection of robots is controlled by a Central Monitoring Unit (CMU). Selection of the new robot is performed by a fuzzy controller called the Task Reallocator (TRAC). It accepts parameters such as the residual energy of the robots, the possibility that the task will be successfully completed by the new robot within the stipulated time, and the distance of the new robot (where the task is reallocated) from the old one (where the task was going on). The proposed methodology increases the probability of completing globally assigned tasks and saves a substantial amount of energy across the collection of robots.
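The TRAC-style selection step can be sketched with simple fuzzy memberships. The example below is illustrative only: the ramp-shaped membership functions, the min (AND) combination rule, and the candidate robot data are all assumptions, not the paper's actual rule base.

```python
# Toy fuzzy scoring of candidate robots for task reallocation: each
# robot is scored over residual energy, estimated completion
# probability, and distance to the task site; the fuzzy AND (min)
# of the three memberships is its suitability.

def ramp_up(x, lo, hi):
    """Membership rising from 0 at lo to 1 at hi."""
    if x <= lo: return 0.0
    if x >= hi: return 1.0
    return (x - lo) / (hi - lo)

def ramp_down(x, lo, hi):
    """Membership falling from 1 at lo to 0 at hi."""
    return 1.0 - ramp_up(x, lo, hi)

def suitability(robot):
    energy_ok = ramp_up(robot["energy"], 20.0, 80.0)      # % residual energy
    likely_ok = ramp_up(robot["p_success"], 0.3, 0.9)
    near_ok   = ramp_down(robot["distance"], 5.0, 50.0)   # metres
    return min(energy_ok, likely_ok, near_ok)             # fuzzy AND

candidates = [
    {"id": 1, "energy": 70, "p_success": 0.8, "distance": 10},
    {"id": 2, "energy": 90, "p_success": 0.5, "distance": 40},
    {"id": 3, "energy": 30, "p_success": 0.9, "distance": 8},
]
chosen = max(candidates, key=suitability)
print(chosen["id"])  # 1
```

Robot 2 is penalized by its low completion probability and robot 3 by its low residual energy, so the balanced robot 1 wins despite not being best on any single criterion.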

Keywords: energy-efficiency, fuzzy-controller, priority, reallocation, task

Procedia PDF Downloads 307
3600 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, accidental risk values become relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering that the industrial sector is critical infrastructure with a large impact on the economy in case of a failure, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate probabilities of failure of the infrastructure and the consequences associated with those failures.
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
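One selection layer of GMDH can be sketched as follows. This is a deliberately minimal version: for every pair of input variables it fits a linear partial description y = a + b*x1 + c*x2 by least squares on a training split and keeps the pair with the lowest error on a validation split. Full GMDH uses quadratic polynomials and stacks such layers until validation error stops improving; the "pipeline" data here are synthetic, with the target depending on features 0 and 2 only.

```python
from itertools import combinations

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit(x1, x2, y):
    """Least-squares coefficients of y = a + b*x1 + c*x2 (normal equations)."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c * v for c, v in zip(ci, y)) for ci in cols]
    return solve3(A, rhs)

def predict(coef, x1, x2):
    a, b, c = coef
    return [a + b * u + c * v for u, v in zip(x1, x2)]

# synthetic data: y depends on features 0 and 2; feature 1 is noise
X = [[1, 5, 2], [2, 1, 3], [3, 8, 7], [4, 2, 9], [5, 9, 4], [6, 3, 11]]
y = [2 * r[0] + r[2] for r in X]

train, val = range(0, 4), range(4, 6)
def col(j, idx): return [X[i][j] for i in idx]

best = None
for j, k in combinations(range(3), 2):
    coef = fit(col(j, train), col(k, train), [y[i] for i in train])
    err = sum((p - y[i]) ** 2 for p, i in
              zip(predict(coef, col(j, val), col(k, val)), val))
    if best is None or err < best[0]:
        best = (err, (j, k))
print(best[1])  # (0, 2): the informative pair survives selection
```

Validation-based selection is the core GMDH idea: partial models that happen to fit noise on the training split are discarded because they generalize poorly.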

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 287
3599 Critical Analysis of Heat Exchanger Cycle for its Maintainability Using Failure Modes and Effect Analysis and Pareto Analysis

Authors: Sayali Vyas, Atharva Desai, Shreyas Badave, Apurv Kulkarni, B. Rajiv

Abstract:

Failure Modes and Effects Analysis (FMEA) is an efficient evaluation technique for identifying potential failures in products, processes, and services. FMEA is designed to identify and prioritize failure modes. It proves to be a useful method for identifying and correcting possible failures at the earliest possible stage so that the consequences of poor performance can be avoided. In this paper, the FMEA tool is used to detect failures of various components of a heat exchanger cycle and to identify the critical failures that may hamper the system's performance. Further, a detailed Pareto analysis is done to find the most critical components of the cycle, the causes of their failures, and recommended actions. This paper can be used as a checklist to help with the maintainability of the system.
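The FMEA-plus-Pareto step can be sketched numerically. Each failure mode gets a Risk Priority Number, RPN = Severity x Occurrence x Detection (each conventionally rated 1-10), and a Pareto cut keeps the modes that make up roughly 80% of the total RPN. The failure modes and ratings below are invented for illustration, not taken from the paper.

```python
# Toy FMEA ranking with an 80% Pareto cut on cumulative RPN.

failure_modes = [
    # (component,       Severity, Occurrence, Detection)
    ("tube fouling",    7, 8, 6),
    ("gasket leak",     8, 5, 4),
    ("pump seal wear",  6, 6, 5),
    ("valve sticking",  4, 3, 7),
    ("sensor drift",    3, 4, 3),
]

# RPN = S * O * D; sort failure modes from highest to lowest risk
ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes),
                reverse=True)
total = sum(rpn for rpn, _ in ranked)

critical, cum = [], 0
for rpn, name in ranked:
    critical.append(name)
    cum += rpn
    if cum / total >= 0.8:   # Pareto cut: "vital few" cover ~80% of risk
        break

print(critical)  # ['tube fouling', 'pump seal wear', 'gasket leak']
```

The three top modes account for about 85% of the total RPN here, so maintenance effort would be concentrated on them first.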

Keywords: FMEA, heat exchanger cycle, Ishikawa diagram, pareto analysis, RPN (Risk Priority Number)

Procedia PDF Downloads 398
3598 Intended and Unintended Outcomes of Partnerships at the Local Level in Slovakia

Authors: Daniel Klimovský

Abstract:

Slovakia is among the most fragmented countries in terms of its local government structure. Slovak central governments implemented both broad devolution and fiscal decentralization some decades ago. However, neither territorial consolidation nor size categorization of local competences and powers has been implemented yet. Given this fact, it is clear that the local governments are challenged not only by their citizens as customers but also by the demand for effective and efficient delivery of services. The paper focuses on the behavior of local governments in Slovakia and their approaches towards other local partners, including other local governments. Analysis of a set of interviews shows that inter-municipal cooperation is the most common local partnership in Slovakia, but due to the diversity of local governments, this kind of cooperation leads to both intended and unintended outcomes. While in many cases local governments deliver local services more efficiently and effectively thanks to inter-municipal cooperation, there are many cases in which inter-municipal cooperation fails and brings questionable or even negative outcomes.

Keywords: local governments, local partnerships, inter-municipal cooperation, delivery of local services

Procedia PDF Downloads 253
3597 An Approach to Electricity Production Utilizing Waste Heat of a Triple-Pressure Cogeneration Combined Cycle Power Plant

Authors: Soheil Mohtaram, Wu Weidong, Yashar Aryanfar

Abstract:

This research investigates the points with heat recovery potential in a triple-pressure cogeneration combined cycle power plant and determines the amount of waste heat that can be recovered. A modified cycle arrangement is then adopted to access these thermal potentials. Modeling of the energy system is followed by thermodynamic and energetic evaluation, and the price of the manufactured products is determined using the Total Revenue Requirement (TRR) method and thermoeconomic analysis. The results of the optimization are then presented in a Pareto chart by implementing a new model with dual objective functions, the cost of power and of produced heat. This model can be utilized to identify the optimal operating point for such power plants based on electricity and heat prices in different regions.

Keywords: heat loss, recycling, unused energy, efficient production, optimization, triple-pressure cogeneration

Procedia PDF Downloads 75
3596 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when using an opaque metal top electrode, because light that is not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention; this strategy extends light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, and the concentration of the guest material and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient and semitransparent ternary-blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and a NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased (to 30 nm) to increase the AVT value (75%).
Therefore, we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode that was applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve charge collection and contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary-blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, suitable morphologies for efficient light-to-current conversion can be identified, and we compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 257
3595 A Concept Analysis of Control over Nursing Practice

Authors: Oznur Ispir, S. Duygulu

Abstract:

Health institutions are places where fast and efficient decisions are required and where mistakes and uncertainties are not tolerated due to the urgency of the services provided. Thus, institutions that aim to provide quality and safe patient care need nurses who participate in decisions, create solutions for problems, take initiative, and bear responsibility for results; in brief, nurses who have control over their practice. Control over nursing practice is defined as affecting the employment and work environment at the unit level of the institution, perceived freedom to organize and evaluate nursing practices, the ability to make independent decisions about patient care, and accountability for the results of such decisions. This study scrutinizes the concept of control over nursing practice (organizational autonomy), which is frequently confused with other concepts (autonomy) in the literature, by reviewing the literature and making suggestions to improve nurses' control over nursing practice.

Keywords: control over nursing practice, nurse, nursing, organizational autonomy

Procedia PDF Downloads 264
3594 A Review Paper on Data Security in Precision Agriculture Using Internet of Things

Authors: Tonderai Muchenje, Xolani Mkhwanazi

Abstract:

Precision agriculture uses a number of technologies, devices, protocols, and computing paradigms to optimize agricultural processes. Big data, artificial intelligence, cloud computing, and edge computing are all used to handle the huge amounts of data generated by precision agriculture. However, precision agriculture is still emerging and has few security features. Furthermore, future solutions will demand data availability and accuracy as key points to help farmers, and security is important for building robust and efficient systems. Since precision agriculture comprises a wide variety and quantity of resources, security must address issues such as compatibility, constrained resources, and massive data. Moreover, the conventional protection schemes used on the traditional internet may not be useful for agricultural systems, creating extra demands and opportunities. Therefore, this paper reviews the state of the art of precision agriculture security, particularly in open-field agriculture, discussing its architecture, describing security issues, and presenting the major challenges and future directions.

Keywords: precision agriculture, security, IoT, EIDE

Procedia PDF Downloads 85
3593 Cloud Monitoring and Performance Optimization Ensuring High Availability and Security

Authors: Inayat Ur Rehman, Georgia Sakellari

Abstract:

Cloud computing has evolved into a vital technology for businesses, offering scalability, flexibility, and cost-effectiveness. However, maintaining high availability and optimal performance in the cloud is crucial for reliable services. This paper explores the significance of cloud monitoring and performance optimization in sustaining the high availability of cloud-based systems. It discusses diverse monitoring tools, techniques, and best practices for continually assessing the health and performance of cloud resources. The paper also delves into performance optimization strategies, including resource allocation, load balancing, and auto-scaling, to ensure efficient resource utilization and responsiveness. Addressing potential challenges in cloud monitoring and optimization, the paper offers insights into data security and privacy considerations. Through this thorough analysis, the paper aims to underscore the importance of cloud monitoring and performance optimization for ensuring a seamless and highly available cloud computing environment.
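Of the optimization strategies surveyed above, auto-scaling is the easiest to sketch concretely. The rule below is a generic threshold-based policy, not any particular provider's API: scale out when average CPU utilization exceeds a high-water mark, scale in below a low-water mark, within fixed instance bounds. Thresholds and the utilization trace are illustrative only.

```python
# Toy threshold-based auto-scaling rule with scale-out/scale-in bounds.

def autoscale(instances, avg_cpu, high=0.75, low=0.25, min_n=1, max_n=10):
    if avg_cpu > high and instances < max_n:
        return instances + 1     # scale out under load
    if avg_cpu < low and instances > min_n:
        return instances - 1     # scale in when idle
    return instances             # stay put inside the dead band

n = 2
trace = [0.9, 0.85, 0.6, 0.2, 0.15, 0.5]   # average CPU per interval
history = []
for cpu in trace:
    n = autoscale(n, cpu)
    history.append(n)
print(history)  # [3, 4, 4, 3, 2, 2]
```

The dead band between the two thresholds prevents oscillation (repeated scale-out followed immediately by scale-in), which is the usual failure mode of naive single-threshold policies.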

Keywords: cloud computing, cloud monitoring, performance optimization, high availability

Procedia PDF Downloads 57
3592 Spatial Distribution and Source Identification of Trace Elements in Surface Soil from Izmir Metropolitan Area

Authors: Melik Kara, Gulsah Tulger Kara

Abstract:

The soil is a crucial component of the ecosystem, and in industrial and urban areas it receives large amounts of trace elements from several sources. Pollutants accumulated in surface soils can therefore be transported to other environmental compartments, such as deep soil, water, plants, and dust particles. While elemental contamination of soils is caused mainly by atmospheric deposition, soil also affects air quality, since enriched trace-element contents in atmospheric particulate matter originate from resuspension of polluted soils. The objectives of this study were to determine the total and leachate concentrations of trace elements in soils of the city of Izmir, to characterize their spatial distribution, and to identify the possible sources of trace elements in surface soils. Surface soil samples were collected from 20 sites and analyzed for total and leachate element concentrations. Analyses of trace elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Er, Eu, Fe, Ga, Gd, Hf, Ho, K, La, Li, Lu, Mg, Mn, Mo, Na, Nd, Ni, P, Pb, Pr, Rb, Sb, Sc, Se, Si, Sm, Sn, Sr, Tb, Th, Ti, Tl, Tm, U, V, W, Y, Yb, Zn, and Zr) were carried out using ICP-MS (Inductively Coupled Plasma Mass Spectrometry). The elemental concentrations were summarized along with overall median, kurtosis, and skewness statistics. The elemental composition indicated that the soil samples were dominated by crustal elements such as Si, Al, Fe, Ca, K, and Mg, and the sea-salt element Na, which is typical for the Aegean region. These elements were followed by Ti, P, Mn, Ba, and Sr. On the other hand, Zn, Cr, V, Pb, Cu, and Ni (anthropogenic elements) were measured at 61.6, 39.4, 37.9, 26.9, 22.4, and 19.4 mg/kg dw, respectively. The leachate element concentrations showed a similar ordering, although they were much lower than the total concentrations.
In the study area, the spatial distribution patterns of elemental concentrations varied among sampling sites. The highest concentrations were measured in the vicinity of industrial areas and main roads. To determine the relationships among elements and to identify possible sources, PCA (Principal Component Analysis) was applied to the data. The analysis resulted in six factors. The first factor exhibited high loadings of Co, K, Mn, Rb, V, Al, Fe, Ni, Ga, Se, and Cr; because of Co, K, Rb, and Se, this factor could be interpreted as residential heating. The second factor was positively associated with V, Al, Fe, Na, Ba, Ga, Sr, Ti, Se, and Si and therefore represents mixed city dust. The third factor showed high loadings of Fe, Ni, Sb, As, and Cr and could be associated with industrial facilities. The fourth factor was associated with Cu, Mo, Zn, and Sn, which are marker elements of traffic. The fifth factor represents crustal dust, due to its high correlation with Si, Ca, and Mg. The last factor is loaded with Pb and Cd, emitted from industrial activities.
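The core of PCA-based source identification can be sketched in a few lines: standardize the element concentrations, build the correlation matrix, and extract the leading factor's loadings, here by power iteration. The five-sample, three-element dataset is synthetic (Zn and Pb co-vary as one "traffic" source while Si does not); real studies retain several factors and usually apply a rotation such as varimax.

```python
import math

# Minimal PCA first-component extraction for source identification.

def standardize(col):
    n = len(col)
    mu = sum(col) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in col) / (n - 1))
    return [(v - mu) / sd for v in col]

def pca_first_component(rows, iters=200):
    cols = [standardize(list(c)) for c in zip(*rows)]
    p, n = len(cols), len(rows)
    # correlation matrix of the standardized columns
    corr = [[sum(a * b for a, b in zip(cols[i], cols[j])) / (n - 1)
             for j in range(p)] for i in range(p)]
    v = [1.0] * p                        # power iteration: top eigenvector
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# synthetic soil samples; columns: Zn, Pb, Si (mg/kg)
samples = [
    (60, 25, 305), (80, 33, 291), (40, 18, 300), (95, 40, 305), (55, 22, 299),
]

loadings = pca_first_component(samples)
# Zn and Pb load together on factor 1 (same sign, large magnitude); Si does not
print([round(x, 2) for x in loadings])
```

Elements that load heavily on the same factor vary together across sites, which is what justifies reading each factor as a common emission source.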

Keywords: trace elements, surface soil, source apportionment, Izmir

Procedia PDF Downloads 135