Search results for: truncation level method
385 Evaluation of Electro-Flocculation for Biomass Production of Marine Microalgae Phaeodactylum tricornutum
Authors: Luciana C. Ramos, Leandro J. Sousa, Antônio Ferreira da Silva, Valéria Gomes Oliveira Falcão, Suzana T. Cunha Lima
Abstract:
The commercial production of biodiesel using microalgae demands a high energy input for harvesting biomass, making production economically unfeasible. Methods currently used involve mechanical, chemical, and biological procedures. In this work, a flocculation system is presented as a cost- and energy-effective process to increase biomass production of Phaeodactylum tricornutum. This diatom is the only species of the genus that presents the fast growth and lipid accumulation ability that are of great interest for biofuel production. The alga, selected from the Bank of Microalgae, Institute of Biology, Federal University of Bahia (Brazil), was cultivated in a tubular reactor with a photoperiod of 12 h (light/dark), a photon flux of about 35 μmol photons m-2s-1, and a temperature of 22 °C. The medium used for growing cells was the Conway medium, with addition of silica. The growth curve was monitored by cell counts in a Neubauer chamber and by optical density in a spectrophotometer at 680 nm. The cells were precipitated at the end of the stationary phase of growth, 21 days after inoculation, using two methods: centrifugation at 5000 rpm for 5 min, and electro-flocculation at 19 EPD and 95 W. After precipitation, cells were frozen at -20 °C and subsequently lyophilized. The biomass obtained by electro-flocculation was approximately four times greater than that achieved by centrifugation. The benefits of this method are that no chemical flocculants need to be added and that similar cultivation conditions can be used for biodiesel production and pharmacological purposes. The results may contribute to lowering biodiesel production costs using marine microalgae.
Keywords: Biomass, diatom, flocculation, microalgae.
384 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials
Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic
Abstract:
The quality of laser welded-brazed (LWB) joints was strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at an optimized process condition, which exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the thickness of the IMC layer as thin as 5 µm. The thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history evidenced that the duration of the critical temperature range (700–900 °C) in this high scanning speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled mechanism.
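As a concrete illustration of the quadratic response-surface fitting step described above, the sketch below fits a second-order polynomial in the three factors (laser power, welding speed, wire feed rate) by least squares. The design points follow a standard 15-run Box-Behnken layout over the quoted factor ranges, but the response values are illustrative placeholders rather than the paper's measurements, and the fitting routine is a generic one rather than the authors' RSM software.

```python
# Minimal sketch: fit a second-order (quadratic) response-surface model y = f(P, v, f)
# by ordinary least squares. Factor levels follow the ranges quoted in the abstract;
# the response values are made-up placeholders for illustration only.
import numpy as np

def quadratic_terms(X):
    """Expand [x1, x2, x3] rows into the 10 terms of a full quadratic model."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),          # intercept
        x1, x2, x3,               # linear terms
        x1**2, x2**2, x3**2,      # pure quadratic terms
        x1*x2, x1*x3, x2*x3,      # two-factor interactions
    ])

# Box-Behnken-style design: laser power (kW), welding speed (mm/s), wire feed rate (mm/s)
X = np.array([
    [3.2, 60, 80], [4.0, 60, 80], [3.2, 80, 80], [4.0, 80, 80],
    [3.2, 70, 70], [4.0, 70, 70], [3.2, 70, 90], [4.0, 70, 90],
    [3.6, 60, 70], [3.6, 80, 70], [3.6, 60, 90], [3.6, 80, 90],
    [3.6, 70, 80], [3.6, 70, 80], [3.6, 70, 80],
])
y = np.array([3.1, 3.6, 3.0, 3.4, 3.2, 3.5, 3.3, 3.7,
              3.1, 3.0, 3.4, 3.3, 3.6, 3.5, 3.6])  # hypothetical joint strength values

beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)

# Predict the response at a candidate optimum suggested by a desirability analysis
x_opt = np.array([[3.8, 72.0, 85.0]])
print("fitted coefficients:", np.round(beta, 4))
print("predicted strength at candidate optimum:", float(quadratic_terms(x_opt) @ beta))
```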
Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.
383 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams
Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas
Abstract:
Web-expanded steel beams provide an easy and economical solution for systems having longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until the introduction of sinusoidal web-expanded beams, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams. In the present research, the optimum design of this new generation of beams, namely sinusoidal web-expanded beams, is carried out and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the beams' entire response to increasing values of external loading until they lose their load-carrying capacity. An FE model of each specimen used in the experimental studies is built. These models are used to simulate the experimental work in order to verify the test results and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of the beams.
Keywords: Steel structures, web-expanded beams, Angelina™ beam, optimum design, failure modes, finite element analysis.
382 Preventive Interventions for Central Venous Catheter Infections in Intensive Care Units: A Systematic Literature Review
Authors: Jakob Renko, Deja Praprotnik, Kristina Martinovič, Igor Karnjuš
Abstract:
Catheter-related bloodstream infections are a major burden for healthcare and patients. Although infections of this type cannot be completely avoided, they can be reduced by taking preventive measures. The aim of this study is to review and analyze the existing literature on interventions to prevent central venous catheter (CVC) infections. A systematic literature review was carried out. The international databases CINAHL, Medline, PubMed, and Web of Science were searched using the search strategy: "catheter-related infections" AND "intensive care units" AND "prevention" AND "central venous catheter." Articles that met the inclusion and exclusion criteria were included in the study. The literature search flow is illustrated by the PRISMA diagram. The descriptive research method was used to analyze the data. Out of 554 search results, 22 studies were included in the final analysis. We identified seven relevant preventive measures against CVC infections: washing the whole body with chlorhexidine gluconate (CHG) solution, disinfecting the CVC entry site with CHG solution, use of CHG or silver dressings, alcohol protective caps, CVC care education, selection of an appropriate catheter, and multicomponent care bundles. Both single interventions and multicomponent care bundles have been shown to be effective measures to prevent CVC infections in adult patients in the ICU. None of the measures identified stood out in terms of its effectiveness. Prevention work to reduce CVC infections in the ICU is a complex process that requires the simultaneous consideration of several factors.
Keywords: Central venous access, critically ill patients, hospital-acquired complications, prevention.
381 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods: the standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
Keywords: Data mining, ensemble, radial basis function, support vector machine, accuracy.
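A minimal sketch of the two ensemble ideas named above is given below using scikit-learn: a homogeneous bagged ensemble and a heterogeneous majority-voting ensemble. An RBF-kernel SVM stands in for the RBF base classifier, and a synthetic dataset stands in for the intrusion detection data; neither the arcing procedure nor the paper's preprocessing phase is reproduced here.

```python
# Sketch: homogeneous (bagging) and heterogeneous (majority voting) ensembles.
# Synthetic data replace the intrusion-detection datasets used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)  # placeholder for KDD-style records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Homogeneous ensemble: bagging an RBF-kernel SVM (stand-in for the bagged RBF/SVM)
bagged_svm = BaggingClassifier(SVC(kernel="rbf", gamma="scale"),
                               n_estimators=10, random_state=0)
bagged_svm.fit(X_tr, y_tr)

# Heterogeneous ensemble: majority voting over dissimilar base classifiers
voting = VotingClassifier(estimators=[("svm", SVC(kernel="rbf", gamma="scale")),
                                      ("tree", DecisionTreeClassifier(max_depth=8))],
                          voting="hard")
voting.fit(X_tr, y_tr)

print("bagged SVM accuracy :", accuracy_score(y_te, bagged_svm.predict(X_te)))
print("voting accuracy     :", accuracy_score(y_te, voting.predict(X_te)))
```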
380 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia
Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak
Abstract:
In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow Cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents a pipeline software as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice for patients with ALL. The new tool increases accuracy in the assessment of FCM-MRD in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload, lab costs, and staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for the analysis, clinical follow-up and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.
Keywords: Data security, flow cytometry, leukaemia, telematics platform, telemedicine.
379 The Impact of Digital Inclusive Finance on the High-Quality Development of China's Export Trade
Authors: Yao Wu
Abstract:
In the context of financial globalization, China has put forward the policy goal of high-quality development, and the digital economy, with its advantage in information resources, is driving China's export trade towards high-quality development. Owing to the long-standing financing constraints of small and medium-sized export enterprises, how to expand the export scale of small and medium-sized enterprises has become a major obstacle to the development of China's export trade. This paper first adopts the hierarchical analysis method to establish an evaluation system for the high-quality development of China's export trade. Second, panel data of 30 Chinese provinces from 2011 to 2018 are selected for empirical analysis, and a model of the impact of digital inclusive finance on the high-quality development of China's export trade is established. Based on the analysis of the heterogeneous-firm trade model, a mediating effect model is then established to verify the mediating role of credit constraints in the high-quality development of China's export trade. Based on the above analysis, this paper concludes that digital inclusive finance, with its unique digital and inclusive nature, alleviates the credit constraint problem among SMEs, enhances the binary (extensive and intensive) margins of SMEs' exports, optimizes their export scale and structure, and promotes the high-quality development of regional and even national export trade. Finally, based on the findings of this paper, we propose insights and suggestions for digital inclusive finance to promote the high-quality development of export trade.
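A minimal sketch of the empirical strategy described above, fixed-effects panel regressions combined with a Baron-Kenny style mediation check, is shown below. The variable names, the synthetic data, and the simple two-step mediation specification are illustrative assumptions; they are not the paper's dataset or exact econometric specification.

```python
# Sketch: province-year fixed-effects regression plus a simple mediation check.
# Synthetic data stand in for the 30-province panel (2011-2018) used in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
provinces, years = [f"p{i}" for i in range(30)], list(range(2011, 2019))
df = pd.DataFrame([(p, t) for p in provinces for t in years], columns=["province", "year"])
df["dif"] = rng.normal(size=len(df))                      # digital inclusive finance index
df["credit_constraint"] = -0.5 * df["dif"] + rng.normal(size=len(df))
df["export_quality"] = 0.4 * df["dif"] - 0.3 * df["credit_constraint"] + rng.normal(size=len(df))

# (1) Total effect of digital inclusive finance on export-trade quality
m1 = smf.ols("export_quality ~ dif + C(province) + C(year)", data=df).fit()
# (2) Effect of digital inclusive finance on the mediator (credit constraint)
m2 = smf.ols("credit_constraint ~ dif + C(province) + C(year)", data=df).fit()
# (3) Joint model: does the mediator absorb part of the effect?
m3 = smf.ols("export_quality ~ dif + credit_constraint + C(province) + C(year)", data=df).fit()

print("total effect of dif      :", round(m1.params["dif"], 3))
print("dif -> credit constraint :", round(m2.params["dif"], 3))
print("direct effect of dif     :", round(m3.params["dif"], 3))
print("mediator coefficient     :", round(m3.params["credit_constraint"], 3))
```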
Keywords: Digital inclusive finance, high-quality development of export trade, fixed effects, binary marginal effects.
378 Disparities versus Similarities: WHO GPPQCL and ISO/IEC 17025:2017 International Standards for Quality Management Systems in Pharmaceutical Laboratories
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn, P. Shivanand
Abstract:
Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential in ensuring the accuracy of test results. ‘ISO/IEC 17025:2017’ and the ‘WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)’ are two quality standards commonly employed in developing laboratory quality systems. A review was conducted on the two standards to elaborate on areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in expectations: GPPQCL emphasizes system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently has more global patronage than GPPQCL.
Keywords: ISO/IEC 17025:2017, laboratory standards, quality control, WHO GPPQCL.
377 Improvement of Frictional Coefficient of Modified Shoe Soles onto Icy and Snowy Road by Tilting of Added Glass Fibers into Rubber
Authors: Shunya Wakayama, Kazuya Okubo, Toru Fujii, Daisuke Sakata, Noriyuki Kado, Hiroshi Furutachi
Abstract:
The purpose of this study is to propose an effective method to improve the frictional coefficient between shoe rubber soles with added glass fibers and the surfaces of icy and snowy roads, in order to prevent slip-and-fall accidents by the users. The added fibers in the rubber were uniformly tilted with respect to the direction perpendicular to the frictional surface, with tilting angles of -60, -30, +30, +60, 90 degrees and 0 (as the normal specimen), respectively. It was found that a parallel arrangement was effective in improving the frictional coefficient when the glass fibers were embedded in the shoe rubber, while an arrangement perpendicular to the normal direction of the embedded glass fibers on the shoe surface was also effective once the fibers were exposed from the shoe rubber by abrasion. These improvements were explained by the increase of stiffness against shear deformation of the rubber at the critical frictional state, and by adequate scratching of the fibers when they protruded perpendicular to the frictional direction, respectively. The most effective tilting angle for the frictional coefficient between the rubber specimens and a stone was perpendicular (= 0 degree) to the frictional direction. A combined modified rubber specimen having two layers was fabricated, in which the tilting angle of the protruded fibers was 0 degree near the contact surface and the tilting angle of the embedded fibers was 90 degrees near the back surface in the thickness direction, to further improve the frictional coefficient. The current study suggests that effective arrangements of the tilting angle of the added fibers should be applied in designing rubber shoe soles to keep users safe in regions with cold climates.
Keywords: Frictional coefficient, icy and snowy road, shoe rubber soles, tilting angle.
376 Determination of the Thermophysical Characteristics of the Composite Material Clay Cement Paper
Authors: A. Ouargui, N. Belouaggadia, M. Ezzine
Abstract:
In Morocco, the building sector is largely responsible for the evolution of energy consumption. The control of energy in this sector remains a major issue despite the rise of renewable energies. The design of an environmentally friendly building requires mastery and knowledge of energy and bioclimatic aspects. This implies taking into consideration all the elements making up the building and the way in which energy exchanges take place between these elements. In this context, thermal insulation seems to be an ideal starting point for reducing energy consumption and greenhouse gas emissions. The aim of this work is to provide some solutions to reduce energy consumption while maintaining thermal comfort in the building. The objective of our work is to present an experimental study on the characterization of local materials used in the thermal insulation of buildings, namely recycled paper stabilized with cement and clay. The thermal conductivity of these materials, which were formulated from sand, clay, cement, water, and treated paper, was determined by the guarded-hot-plate method. The work involves the design of two materials that were subsequently subjected to thermal and mechanical tests to determine their thermophysical properties. The results show that the thermal conductivity decreases in the case of the paper-cement mixture as well as in that of the paper-clay mixture, and seems to stabilize at around 40%. Measurements of mechanical properties such as flexural strength have shown that enriching the studied material with paper reduces the flexural strength by 20% while optimizing the conductivity.
Keywords: Building, composite material, insulation, thermal conductivity, paper residue.
375 A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipeflow
Authors: Perumal Kumar, Rajamohan Ganesan
Abstract:
Addition of milli- or micro-sized particles to the heat transfer fluid is one of many techniques employed for improving the heat transfer rate. Though this looks simple, this method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. So it is imperative that the Reynolds number (Re) and the volume fraction be optimal for better thermal-hydraulic effectiveness. In this work, the heat transfer enhancement in turbulent pipe flow with constant wall temperature using aluminium oxide nanofluids at low and high volume fractions has been studied by computational fluid dynamic modeling of the nanofluid flow adopting the single-phase approach. Nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e. 1%, 4% and 6%) are found to be in reasonable agreement with both experimental and numerical results available in the literature. So the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
Keywords: Heat transfer intensification, nanofluid, CFD, friction factor.
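The single-phase approach mentioned above treats the nanofluid as one fluid with effective properties. The sketch below shows one common way such properties and the resulting turbulent-pipe-flow estimates can be computed; the specific correlations (mixture rule, Maxwell conductivity, Brinkman viscosity, Dittus-Boelter and Blasius) and the property values are standard textbook choices taken as assumptions here, not necessarily the ones used in the paper.

```python
# Sketch: effective properties of a water/Al2O3 nanofluid (single-phase treatment)
# and rough turbulent pipe-flow estimates. Correlations and property values are
# common textbook choices, used here only for illustration.

def nanofluid_properties(phi, rho_bf=997.0, cp_bf=4180.0, k_bf=0.613, mu_bf=8.9e-4,
                         rho_p=3970.0, cp_p=765.0, k_p=40.0):
    """phi is the particle volume fraction (e.g. 0.01 for 1%)."""
    rho = (1 - phi) * rho_bf + phi * rho_p                      # mixture rule
    cp = ((1 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p) / rho
    mu = mu_bf / (1 - phi) ** 2.5                               # Brinkman viscosity
    k = k_bf * (k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)) / (
                k_p + 2 * k_bf - phi * (k_p - k_bf))            # Maxwell conductivity
    return rho, cp, mu, k

def pipe_flow_estimates(phi, velocity=2.0, diameter=0.01):
    rho, cp, mu, k = nanofluid_properties(phi)
    re = rho * velocity * diameter / mu                         # Reynolds number
    pr = mu * cp / k                                            # Prandtl number
    nu = 0.023 * re ** 0.8 * pr ** 0.4                          # Dittus-Boelter (heating)
    f = 0.316 * re ** -0.25                                     # Blasius friction factor
    return re, nu, f

for phi in (0.0, 0.001, 0.005, 0.01):
    re, nu, f = pipe_flow_estimates(phi)
    print(f"phi={phi:.3f}  Re={re:8.0f}  Nu={nu:6.1f}  f={f:.4f}")
```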
374 Performances Assessment of Direct Torque Controlled IM Drives Using Fuzzy Logic Control and Space Vector Modulation Strategy
Authors: L. Moussaoui, L. Rahmani
Abstract:
This paper deals with the direct torque control (DTC) of the induction motor. This type of control allows decoupled control of the flux and the torque without the need for a coordinate transformation. However, as with other hysteresis-based systems, the classical DTC scheme exhibits high ripple in both the electromagnetic torque and the stator flux, as well as distortion in the stator current. It also suffers from a variable switching frequency. To solve these problems, various modifications to the conventional DTC scheme have been made during the last decade. Indeed, DTC based on space vector modulation (SVM) has proved to generate very low ripples in torque and flux with a constant switching frequency, while showing almost the same dynamic performance as the classical DTC system. On the other hand, fuzzy logic is considered an interesting alternative approach for its advantages: analysis close to the user's requirements, ability to control nonlinear systems, good dynamic performance, and inherent robustness.
Therefore, two fuzzy direct torque control approaches, for the induction motor fed by an SVM voltage source inverter, are proposed in this paper. By using these two approaches to DTC, the advantages of fuzzy logic control, space vector modulation, and the direct torque control method are combined. The performance of these DTC schemes is evaluated through digital simulation using the Matlab/Simulink platform and fuzzy logic tools. Simulation results illustrate the effectiveness and the superiority of the proposed Fuzzy DTC-SVM schemes in comparison to the classical DTC.
Keywords: Direct torque control, Fuzzy logic control, Induction motor, Switching frequency, Space vector modulation, Torque and flux ripples.
373 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling
Authors: János Levendovszky, András Oláh
Abstract:
In this paper, novel statistical-sampling-based equalization techniques and CNN-based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and superior performance compared to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper has also developed a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method has also been analyzed by simulations.
Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.
372 Combined Effect of Heat Stimulation and Delayed Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar
Authors: Faraidoon Rahmanzai, Mizuki Takigawa, Yu Bomura, Shigeyuki Date
Abstract:
To obtain high quality and the required workability of mortar, different types of superplasticizers are used. Superplasticizers are chemical admixtures used in the mix to improve the fluidity of mortar. Many factors influence how well a superplasticizer disperses the cement particles in the mortar: the nature and amount of cement replaced by slag, the mixing procedure, the delayed addition time, and the heat stimulation technique applied to the superplasticizer all have varied effects on the fluidity of the cementitious material. In this experiment, the superplasticizers were heated for 1 hour at 60 °C in a thermostatic chamber. Furthermore, the effect of the delayed addition time of the heat-stimulated superplasticizers (SP) was also analyzed. This method was applied to two types of polycarboxylic acid ether based SP (a precast-type superplasticizer (SP2) and a ready-mix-type superplasticizer (SP1)) in combination with a partial replacement of normal Portland cement with blast furnace slag (BFS) at a 30% w/c ratio. The fluidity, air content, fresh density, and compressive strength at 7 and 28 days were studied. The results indicate that the delayed addition time and the heat stimulation technique improved the flow and air content, decreased the density, and slightly decreased the compressive strength of the mortar. Moreover, the slag improved the flow of the mortar as the amount of slag increased, and the effect of the external temperature of the SP on the flow of the mortar decreased. In comparison, the flow of the mortar was improved with a 5-minute delay for both kinds of SP, but SP1 improved the flow under all conditions. Most importantly, the transition points for both types of SP appear to be the same, at about 5±1 min. Therefore, the optimum addition time of SP to the mortar should be within this period.
Keywords: Combined effect, delayed addition, heat stimulation, flow of mortar.
371 Feasibility Study of Mine Tailing's Treatment by Acidithiobacillus thiooxidans DSM 26636
Authors: M. Gómez-Ramírez, A. Rivas-Castillo, I. Rodríguez-Pozos, R. A. Avalos-Zuñiga, N. G. Rojas-Avelizapa
Abstract:
Among the diverse types of pollutants produced by anthropogenic activities, metals represent a serious threat, due to their accumulation in ecosystems and their elevated toxicity. The mine tailings of abandoned mines contain high levels of metals such as arsenic (As), zinc (Zn), copper (Cu), and lead (Pb), which do not undergo any degradation process and accumulate in the environment. Abandoned mine tailings could potentially contaminate rivers and aquifers, representing a risk to human health due to their high metal content. In an attempt to remove the metals and thereby mitigate environmental pollution, an environmentally friendly and economical method of bioremediation has been introduced. Bioleaching has been actively studied over the last several years, and it is one of the bioremediation solutions used to treat heavy metals contained in sewage sludge, sediment and contaminated soil. Acidithiobacillus thiooxidans, an extremely acidophilic, chemolithoautotrophic, gram-negative, rod-shaped microorganism, which is typically related to Cu mining operations (bioleaching), has been well studied for industrial applications. The sulfuric acid produced plays a major role in bioleaching. Specifically, Acidithiobacillus thiooxidans strain DSM 26636 has been able to leach Al, Ni, V, Fe, Mg, and Si contained in slags from coal combustion wastes. The present study reports the ability of A. thiooxidans DSM 26636 to bioleach metals contained in two different mine tailing samples (MT1 and MT2). It was observed that Al, Fe, and Mn were removed at 36.3±1.7, 191.2±1.6, and 4.5±0.2 mg/kg for MT1, and at 74.5±0.3, 208.3±0.5, and 20.9±0.1 mg/kg for MT2. Besides, < 1.5 mg/kg of Au and Ru were also bioleached from MT1; in MT2, bioleaching of Zn was observed at 55.7±1.3 mg/kg, together with removal of < 1.5 mg/kg of As, Ir, and Li, and 0.6 mg/kg of Os. These results show the potential of strain DSM 26636 for the bioleaching of metals from different mine tailings.
Keywords: A. thiooxidans, bioleaching, metals, mine tailings.
370 Identification of Factors Influencing Company's Competitiveness
Authors: D. Ščeulovs, E. Gaile-Sarkane
Abstract:
Fast development of technologies, economic globalization and many other external circumstances stimulate company competitiveness. One of the major trends in today's business is the shift to the exploitation of the Internet and the electronic environment for entrepreneurial needs. The latest research confirms that the e-environment provides a range of possibilities and opportunities for companies, especially for micro-, small- and medium-sized companies, which have limited resources. The usage of e-tools raises the effectiveness and the profitability of an organization, as well as its competitiveness. In the electronic market, as in the classic one, there are factors, such as globalization, development of new technology, price-sensitive consumers, the Internet, and new distribution and communication channels, that influence entrepreneurship. As a result of e-environment development, e-commerce and e-marketing grow as well.
Objective of the paper: To describe and identify factors influencing company’s competitiveness in e-environment.
Research methodology: The authors employ well-established quantitative and qualitative research methods: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed by using scientific research and publications, such as those from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the surveying process. Research result: The authors detected and classified factors influencing competitiveness in the e-environment.
In this paper, the authors present their findings based on theoretical, scientific, and field research. The authors have conducted a survey on e-environment utilization among Latvian enterprises.
Keywords: Competitiveness, e-environment, factors, factor analysis.
369 Displacement Solution for a Static Vertical Rigid Movement of an Interior Circular Disc in a Transversely Isotropic Tri-Material Full-Space
Authors: D. Mehdizadeh, M. Rahimian, M. Eskandari-Ghadi
Abstract:
This article is concerned with the determination of the static interaction of a vertically loaded rigid circular disc embedded at the interface of a horizontal layer sandwiched between two different transversely isotropic half-spaces, together called a tri-material full-space. The axes of symmetry of the different regions are assumed to be normal to the horizontal interfaces and parallel to the direction of movement. With the use of a potential function method, and by applying Hankel integral transforms in the radial direction, the governing partial differential equation for the single scalar potential function is transformed into a fourth-order ordinary differential equation, and the mixed boundary conditions are transformed into a pair of integral equations, called dual integral equations, which can be reduced to a Fredholm integral equation of the second kind that is solved analytically. Then, the displacements and stresses are given in the form of improper line integrals, resulting from the inverse Hankel integral transforms. It is shown that the present solutions are in exact agreement with the existing solutions for a homogeneous full-space with transversely isotropic material. To confirm the accuracy of the numerical evaluation of the integrals involved, the numerical results are compared with the solutions that exist for the homogeneous full-space. Then, several cases with different degrees of material anisotropy are compared to portray the effect of the degree of anisotropy.
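For readers unfamiliar with the transform machinery referred to above, the generic forms of the Hankel transform pair (of order ν) and of a Fredholm integral equation of the second kind are sketched below; the particular potential function, kernel and transform order used by the authors are not reproduced here.

```latex
% Hankel transform pair of order \nu and its inverse
\tilde{f}_{\nu}(\xi) = \int_{0}^{\infty} f(r)\, J_{\nu}(\xi r)\, r \, \mathrm{d}r ,
\qquad
f(r) = \int_{0}^{\infty} \tilde{f}_{\nu}(\xi)\, J_{\nu}(\xi r)\, \xi \, \mathrm{d}\xi .

% Generic Fredholm integral equation of the second kind for the unknown \varphi
\varphi(x) = g(x) + \lambda \int_{a}^{b} K(x,t)\, \varphi(t)\, \mathrm{d}t .
```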
Keywords: Transversely isotropic, rigid disc, elasticity, dual integral equations, tri-material full-space.
368 Efficacy of Biofeedback-Assisted Pelvic Floor Muscle Training on Postoperative Stress Urinary Incontinence
Authors: Asmaa M. El-Bandrawy, Afaf M. Botla, Ghada E. El-Refaye, Hassan O. Ghareeb
Abstract:
Background: Urinary incontinence is a common problem among adults. Its incidence increases with age, and it is more frequent in women. Pelvic floor muscle training (PFMT) is the first-line therapy in the treatment of pelvic floor dysfunction (PFD), either alone or combined with biofeedback-assisted PFMT. Aim of the work: The purpose of this study is to evaluate the efficacy of biofeedback-assisted PFMT in postoperative stress urinary incontinence. Settings and design: A single-blind controlled trial design was used. Methods and material: This study was carried out on 30 volunteer patients diagnosed with a severe degree of stress urinary incontinence who were admitted for surgical treatment. They were divided randomly into two equal groups: group A consisted of 15 patients who were treated with postoperative biofeedback-assisted PFMT and a home exercise program; group B consisted of 15 patients who were treated with a home exercise program only. Assessment of all patients in groups A and B was carried out before and after the treatment program by measuring the intra-vaginal pressure in addition to the visual analog scale. Results: At the end of the treatment program, there was a highly statistically significant difference between group A and group B in the intra-vaginal pressure and the visual analog scale, favoring group A. Conclusion: Biofeedback-assisted PFMT is an effective method for the symptomatic relief of postoperative female stress urinary incontinence.
Keywords: Stress urinary incontinence, pelvic floor muscles, pelvic floor exercises, biofeedback.
367 A Model to Determine Atmospheric Stability and its Correlation with CO Concentration
Authors: Kh. Ashrafi, Gh. A. Hoshyaripour
Abstract:
Atmospheric stability plays the most important role in the transport and dispersion of air pollutants. Different methods are used for stability determination, with varying degrees of complexity. Most of these methods are based on the relative magnitude of convective and mechanical turbulence in atmospheric motions. The Richardson number, the Monin-Obukhov length, the Pasquill-Gifford stability classification and the Pasquill-Turner stability classification are the most common parameters and methods. The Pasquill-Turner Method (PTM), which is employed in this study, makes use of observations of wind speed, insolation and the time of day to classify atmospheric stability with distinguishable indices. In this study, a model is presented for the determination of atmospheric stability conditions using the PTM. As a case study, meteorological data of Mehrabad station in Tehran from 2000 to 2005 are applied to the model. Three different categories are considered to deduce the pattern of stability conditions. First, the overall pattern of stability classification is obtained, and the results show that the atmosphere is in stable, neutral and unstable conditions 38.77%, 27.26% and 33.97% of the time, respectively. It is also observed that days are mostly unstable (66.50%) while nights are mostly stable (72.55%). Second, monthly and seasonal patterns are derived, and the results indicate that the relative frequency of stable conditions decreases from January to June and increases from June to December, while the results for unstable conditions behave in exactly the opposite manner. Autumn is the most stable season, with a relative frequency of 50.69% for stable conditions, whereas it is 42.79%, 34.38% and 27.08% for winter, summer and spring, respectively. The hourly stability pattern is the third category, which points out that unstable conditions are dominant from approximately 03-15 GMT and 04-12 GMT for the warm and cold seasons, respectively. Finally, the correlation between atmospheric stability and CO concentration is obtained.
Keywords: Atmospheric stability, Pasquill-Turner classification, convective turbulence, mechanical turbulence, Tehran.
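To make the classification step concrete, the sketch below implements the classical Pasquill-Gifford daytime/night-time lookup that underlies schemes of this kind, using surface wind speed plus an insolation or cloud-cover category. It is a simplified stand-in for the full Pasquill-Turner method (which works with a net radiation index); the class boundaries follow the standard textbook table, and the light-wind night-time entries are a common convention rather than values specific to this study.

```python
# Sketch: classical Pasquill-Gifford stability classes (A = very unstable ... F = stable)
# from surface wind speed and an insolation/cloud category. Simplified relative to the
# Pasquill-Turner method used in the study.

DAY = {  # insolation: "strong", "moderate", "slight"
    "strong":   [(2, "A"), (3, "A-B"), (5, "B"), (6, "C"), (float("inf"), "C")],
    "moderate": [(2, "A-B"), (3, "B"), (5, "B-C"), (6, "C-D"), (float("inf"), "D")],
    "slight":   [(2, "B"), (3, "C"), (5, "C"), (6, "D"), (float("inf"), "D")],
}
NIGHT = {  # cloud cover: "cloudy" (>= 4/8) or "clear" (<= 3/8)
    # very light winds at night are often left unclassified; E/F used here by convention
    "cloudy": [(2, "E"), (3, "E"), (5, "D"), (6, "D"), (float("inf"), "D")],
    "clear":  [(2, "F"), (3, "F"), (5, "E"), (6, "D"), (float("inf"), "D")],
}

def pasquill_class(wind_speed_ms, daytime, category):
    """Return the Pasquill stability class for the 10 m wind speed (m/s)."""
    table = DAY if daytime else NIGHT
    for upper, klass in table[category]:
        if wind_speed_ms < upper:
            return klass
    return "D"  # unreachable because the last bound is infinite; kept for clarity

print(pasquill_class(1.5, daytime=True, category="strong"))   # -> A  (very unstable)
print(pasquill_class(4.0, daytime=False, category="clear"))   # -> E
print(pasquill_class(7.0, daytime=True, category="slight"))   # -> D  (near neutral)
```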
366 Investigation of Possible Behavioural and Molecular Effects of Mobile Phone Exposure on Rats
Authors: Ç. Gökçek-Saraç, Ş. Özen, N. Derin
Abstract:
The N-methyl-D-aspartate (NMDA)-dependent pathway is the major intracellular signaling pathway implicated in both short- and long-term memory formation in the hippocampus, which is the most studied brain structure because of its well-documented role in learning and memory. However, little is known about the effects of radiofrequency electromagnetic radiation (RF-EMR) exposure on the NMDA receptor signaling pathway, including the activation of protein kinases, notably Ca2+/calmodulin-dependent protein kinase II alpha (CaMKIIα). The aim of the present study was to investigate the effects of acute and chronic 900 MHz RF-EMR exposure on both passive avoidance behaviour and hippocampal levels of CaMKIIα and its phosphorylated form (pCaMKIIα). Rats were divided into the following groups: sham rats, and rats exposed to 900 MHz RF-EMR for 2 h/day for 1 week (acute group) or 10 weeks (chronic group), respectively. The passive avoidance task was used as the behavioural method. The hippocampal levels of the selected kinases were measured using the Western blotting technique. The results of the passive avoidance task showed that both acute and chronic exposure to 900 MHz RF-EMR can impair passive avoidance behaviour, with minor effects in the chronic group of rats. The analysis of the Western blot data of the selected protein kinases demonstrated that hippocampal levels of CaMKIIα and pCaMKIIα were significantly higher in the chronic group of rats compared to the acute group. Taken together, these findings demonstrate that different exposure durations (1 week vs 10 weeks) of 900 MHz RF-EMR have different effects on both the passive avoidance behaviour of rats and the hippocampal levels of the selected protein kinases.
Keywords: Hippocampus, protein kinase, rat, RF-EMR.
365 Acceleration-Based Motion Model for Visual SLAM
Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORBSLAM3, utilize a constant velocity motion model. The utilization of this model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, it is often difficult to satisfy the constant velocity motion model in actual situations. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To provide a more accurate description of the camera pose acceleration, we separate the pose transformation matrix into its rotation matrix and translation vector components. The rotation matrix is represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORBSLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method yields a more accurate initial pose estimation, resulting in an improvement of 6.61% and 6.46% in the accuracy of the ORBSLAM3 system on the two test sequences, respectively.
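The sketch below contrasts the usual constant-velocity prediction with an acceleration-style prediction in the spirit described above: the relative motion between consecutive frames is split into a rotation vector and a translation vector, and their frame-to-frame change is extrapolated. This is one plausible reading of the model, written as an illustration; it is not the authors' implementation and is not integrated with ORBSLAM3.

```python
# Sketch: predicting the initial pose of the next frame from past camera poses.
# Poses are 4x4 homogeneous transforms (numpy + scipy Rotation).
import numpy as np
from scipy.spatial.transform import Rotation

def make_T(rotvec, t):
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = t
    return T

def inv_T(T):
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def predict_constant_velocity(T_prev, T_curr):
    """Re-apply the last relative motion (the model used by many SLAM front ends)."""
    return T_curr @ (inv_T(T_prev) @ T_curr)

def predict_acceleration(T_pprev, T_prev, T_curr):
    """Extrapolate the change of the relative motion, with rotation as a rotation vector
    and translation handled separately (illustrative acceleration-style model)."""
    d0 = inv_T(T_pprev) @ T_prev          # relative motion two steps ago
    d1 = inv_T(T_prev) @ T_curr           # most recent relative motion
    r0, r1 = (Rotation.from_matrix(d[:3, :3]).as_rotvec() for d in (d0, d1))
    t0, t1 = d0[:3, 3], d1[:3, 3]
    d2 = make_T(r1 + (r1 - r0), t1 + (t1 - t0))   # assume a constant change of motion
    return T_curr @ d2

# Toy trajectory with a slowly accelerating forward motion and turn
T0 = make_T([0.00, 0.00, 0.0], [0.00, 0, 0])
T1 = make_T([0.00, 0.02, 0.0], [0.10, 0, 0])
T2 = make_T([0.00, 0.05, 0.0], [0.25, 0, 0])
print(np.round(predict_constant_velocity(T1, T2)[:3, 3], 3))
print(np.round(predict_acceleration(T0, T1, T2)[:3, 3], 3))
```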
Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.
364 Physical-Mechanical Characteristics of Monocrystalline Si1-xGex (x≤0,02) Solid Solutions
Authors: I. Kurashvili, A. Sichinava, G. Bokuchava, G. Darsavelidze
Abstract:
Si-Ge solid solutions (bulk poly- and mono-crystalline samples, thin films) are highly promising for application in semiconductor devices, in particular in optoelectronics and microelectronics. From this point of view, a comprehensive study of the structural state of defects and of the structure-sensitive physical properties of Si-Ge solid solutions, depending on the content of the Si and Ge components, is very important. The present work deals with investigations of the microstructure, microhardness, internal friction and shear modulus of Si1-xGex (x≤0.02) bulk monocrystals conducted at room temperature. The Si-Ge bulk crystals were grown by the Czochralski method in the [111] crystallographic direction. The investigated monocrystalline Si-Ge samples are characterized by p-type conductivity and a carrier concentration of 5×10^14–1×10^15 cm^-3. Microhardness was studied on a Dynamic Ultra Micro Hardness Tester DUH-201S with a Berkovich indenter. For samples of 0.5×0.5×(10-15) mm^3 size, oriented along the [111] direction and subjected to torsional oscillations at ≈1 Hz, a multistage change of internal friction and shear modulus has been revealed in the strain amplitude interval of 10^-5 to 5×10^-3. The critical values of strain amplitude at which hysteretic changes of the inelastic characteristics and microplasticity are observed have been determined. The critical strain amplitude and elasticity limit values are also determined. A decreasing trend of the dynamic mechanical characteristics is shown with increasing Ge content in the Si-Ge solid solutions. The observed changes are discussed from the point of view of the interaction of various dislocations with point defects and their complexes in the real structure of Si-Ge solid solutions.
Keywords: Internal friction, microhardness, relaxation processes, shear modulus, Si-Ge.
363 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a safety-critical European Rail Traffic Management System (ERTMS) incident on the Cambrian Railway in the UK, using RAIB report 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach into the Management Oversight and Risk Tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and of the bow-tie (fault and event tree) model-based safety risk modelling technique; however, the role of systematic errors due to "heuristics and biases" is not yet appreciated. This overcomes the problem of the omission of human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with pieces of evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the ISO/IEC 15288 systems engineering standard, and in industrial contexts such as the GB railways and Artificial Intelligence (AI) as well.
Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.
362 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah's Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last five years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is significant for Libya, due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and the available response options that would provide an integrated perspective on mitigation. To investigate this, this paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats, as the first step in understanding how they may be damaged by an oil spill. This paper also investigates the coastal resources, providing a better understanding of the resources and factors that affect the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of the coastal resources, and also creates spatial maps of coastal resources and their vulnerability to oil spills along the coast. This study proposes an analysis of coastal resource conditions at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Oil spill contamination analysis and its impact on coastal resources depend on (1) the oil spill sequence, (2) the oil spill location, and (3) the oil spill movement near the coastal area. The resulting maps show natural and socio-economic activity, environmental resources along the coast, and the oil spill location. Moreover, the study provides significant geodatabase information which is required for coastal sensitivity index mapping and coastal management studies. The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used in order to store and illustrate the spatial convergence of existing socio-economic activities such as fishing, tourism, and the salt industry, and of ecosystem components such as sea turtle nesting areas, Sabkha habitats, and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.
Keywords: Coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libyan, oil spill.
361 Real-Time Land Use and Land Information System in Homagama Divisional Secretariat Division
Authors: Kumara Jayapathma J. H. M. S. S., Dampegama S. D. P. J.
Abstract:
Land is a valuable and limited resource which constantly changes with the growth of the population. An efficient and good land management system is essential to avoid conflicts associated with land. This paper aims to design a prototype model of a real-time mobile GIS land use and land information system. The Homagama Divisional Secretariat Division, situated in the Western Province of Sri Lanka, was selected as the study area. The prototype model was developed after reviewing the related literature. The methodology consisted of designing and modeling the prototype as an application running on a mobile platform. The system architecture mainly consists of a Google mapping app for real-time updates with Firebase support tools. The implementation comprises front-end and back-end components. The software tools used in designing the application are Android Studio with Java, based on the GeoJSON file structure. Android Studio with Java and GeoJSON files synchronized to Firebase was found to be a suitable mobile solution for continuously updating the land use and land information system (LIS) in real time in the present scenario. The mobile-based land use and LIS developed in this study is a multi-user application catering to different hierarchy levels such as basic users, supervisory managers, and database administrators. This mobile mapping application will help public-sector field officers without GIS expertise to overcome land use planning challenges, with land use updated in real time.
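To illustrate the GeoJSON file structure mentioned above, the snippet below builds a single land-parcel feature as a Python dictionary and serializes it; a record of this shape could then be pushed to a Firebase database by the mobile client. The attribute names and coordinates are hypothetical examples, not fields from the Homagama system.

```python
# Sketch: one land-use record in GeoJSON Feature form (attribute names are hypothetical).
import json

parcel = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # Exterior ring as [longitude, latitude] pairs; first and last points coincide.
        "coordinates": [[
            [80.002, 6.844], [80.004, 6.844], [80.004, 6.846],
            [80.002, 6.846], [80.002, 6.844],
        ]],
    },
    "properties": {
        "parcel_id": "HOM-000123",      # hypothetical identifier
        "land_use": "paddy",            # current land-use class
        "updated_by": "field_officer_7",
        "updated_at": "2021-06-15T09:30:00Z",
    },
}

geojson_text = json.dumps(parcel, indent=2)
print(geojson_text)  # this payload could be written to Firebase by the mobile app
```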
Keywords: Android, Firebase, GeoJSON, GIS, JAVA, JSON, LIS, mobile GIS, real-time, REST API.
360 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196
Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek
Abstract:
It has been established that microRNAs (miRNAs) play an important role in gene expression by post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will provide more insight into miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since validation of a large number of targets through laboratory experiments is very time-consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved by using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets do not necessarily associate with the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.
Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.
359 The effect of Gamma Irradiation on the Nutritional Properties of Functional Products of the Green Banana
Authors: Magda S. Taipina, Maria L. Garbelotti, Mariana G.B. Cadioli
Abstract:
Banana is one of the most consumed fruits in the tropics and subtropics. Brazil accounts for about 9% of the world banana production. However, production losses are as high as 30 to 40%, and even much higher in some developing countries. Green banana flour is a complex carbohydrate source with functional properties, including high total starch (73.4%) and resistant starch (17.5%) contents. Gamma irradiation is considered to be an alternative method for food preservation. It has been employed due to the need to extend the shelf life of foods while maintaining their safety and avoiding one of the main concerns: nutrient loss. In this work, data on the effects of ionizing radiation on the physicochemical composition (carbohydrates, proteins, lipids, dietary fiber, moisture and ash) of Brazilian functional products (biscuits and bread) made from green banana pulp are presented. The caloric value was calculated. No significant difference was observed between the irradiated and non-irradiated green banana biscuit samples for the following determinations: carbohydrates, proteins, dietary fiber and ash. Only a small significant difference was found in lipids (macronutrients). The results of the physicochemical analysis of the irradiated and non-irradiated green banana bread showed no significant difference for the following determinations: carbohydrates, lipids (macronutrients), moisture, ash and caloric value. A small difference was found in proteins (macronutrients). Irradiation of the functional products (biscuits and bread) with doses of 1 and 3 kGy maintained their original macronutrient content, showing good radioresistance.
Keywords: Irradiation, Functional Food, Nutritional value.
358 Three Computational Mathematics Techniques: Comparative Determination of Area under Curve
Authors: Khalid Pervaiz Akhter, Mahmood Ahmad, Ghulam Murtaza, Ishrat Shafi, Zafar Javed
Abstract:
The objective of this manuscript is to find the area under the plasma concentration-time curve (AUC) for multiple doses of salbutamol sulphate sustained-release tablets (Ventolin® oral tablets SR 8 mg, GSK, Pakistan) in a group of 18 healthy adults by using computational mathematics techniques. Following the administration of 4 doses of Ventolin® tablets every 12 hours to 24 healthy human subjects and bioanalysis of the obtained plasma samples, the plasma drug concentration-time profile was constructed. AUC, an important pharmacokinetic parameter, was measured using the integrated equation for multiple oral dose regimens. The approximate AUC was also calculated by using computational mathematics techniques such as the repeated rectangular, repeated trapezium and repeated Simpson's rules, and compared with the exact value of AUC calculated from the integrated equation for multiple oral dose regimens, in order to find the computational mathematics method that gives AUC values closest to the exact ones. The exact values of AUC for the four consecutive doses of Ventolin® oral tablets were 150.5819473, 157.8131756, 164.4178231 and 162.78 ng.h/ml, while the closest approximated AUC values were 149.245962, 157.336171, 164.2585768 and 162.289224 ng.h/ml, respectively, as found by the repeated rectangular rule. The errors in the approximated values of AUC were negligible. It is concluded that all the computational tools approximated the values of AUC accurately, but the repeated rectangular rule gives slightly better approximated values of AUC as compared to the repeated trapezium and repeated Simpson's rules.
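The three quadrature rules compared above are easy to state in code. The sketch below implements the composite rectangular, trapezium and Simpson's rules for a uniformly sampled concentration-time profile; the one-compartment curve used to generate the samples is a generic illustrative profile, not the salbutamol data reported in the study (Simpson's rule as written also assumes an even number of uniform intervals).

```python
# Sketch: AUC of a sampled concentration-time profile by three composite quadrature rules.
import numpy as np

def auc_rectangular(t, c):
    """Composite (left) rectangular rule."""
    return float(np.sum(c[:-1] * np.diff(t)))

def auc_trapezium(t, c):
    """Composite trapezium rule."""
    return float(np.sum(0.5 * (c[:-1] + c[1:]) * np.diff(t)))

def auc_simpson(t, c):
    """Composite Simpson's rule; needs an even number of uniformly spaced intervals."""
    n = len(t) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    h = (t[-1] - t[0]) / n
    return float(h / 3.0 * (c[0] + c[-1] + 4 * np.sum(c[1:-1:2]) + 2 * np.sum(c[2:-1:2])))

# Illustrative one-compartment oral-absorption profile (hypothetical parameters)
ka, ke, A = 1.2, 0.15, 25.0                  # 1/h, 1/h, ng/ml
t = np.linspace(0.0, 12.0, 13)               # hourly samples over one 12 h dosing interval
c = A * (np.exp(-ke * t) - np.exp(-ka * t))  # ng/ml

print("rectangular :", round(auc_rectangular(t, c), 2))
print("trapezium   :", round(auc_trapezium(t, c), 2))
print("Simpson     :", round(auc_simpson(t, c), 2))
```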
Keywords: Salbutamol sulphate, Area under curve (AUC), repeated rectangular rule, repeated trapezium rule, repeated Simpson's rule.
357 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and are sometimes conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then, mass functions are calculated and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
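For a single sensor, the two ingredients named above, a KL-divergence comparison of the current window against the pre- and post-change models and a CUSUM test, can be sketched as below. The Gaussian models, the window-based estimate and the threshold are illustrative assumptions; the evidence-theory combination across sensors described in the paper is not reproduced here.

```python
# Sketch: KL divergence between Gaussian models and a one-sensor CUSUM change detector.
import numpy as np

def kl_gaussian(mu0, sig0, mu1, sig1):
    """KL( N(mu0, sig0^2) || N(mu1, sig1^2) ) for univariate Gaussians."""
    return np.log(sig1 / sig0) + (sig0**2 + (mu0 - mu1)**2) / (2 * sig1**2) - 0.5

def cusum_alarm(x, mu0, mu1, sigma, threshold):
    """Classic CUSUM on the log-likelihood ratio of post- vs pre-change Gaussians."""
    s = 0.0
    for k, xk in enumerate(x):
        llr = (xk - mu0)**2 / (2 * sigma**2) - (xk - mu1)**2 / (2 * sigma**2)
        s = max(0.0, s + llr)
        if s > threshold:
            return k          # first sample index at which the change is declared
    return None

rng = np.random.default_rng(1)
mu0, mu1, sigma = 0.0, 1.0, 1.0
x = np.concatenate([rng.normal(mu0, sigma, 200),    # pre-change regime
                    rng.normal(mu1, sigma, 200)])   # change occurs at sample 200

# KL distance of a sliding-window estimate to the pre- and post-change models
window = x[-50:]
mu_hat, sig_hat = window.mean(), window.std(ddof=1)
print("KL to pre-change :", round(kl_gaussian(mu_hat, sig_hat, mu0, sigma), 3))
print("KL to post-change:", round(kl_gaussian(mu_hat, sig_hat, mu1, sigma), 3))
print("CUSUM alarm index:", cusum_alarm(x, mu0, mu1, sigma, threshold=8.0))
```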
356 Multilayer Thermal Screens for Greenhouse Insulation
Authors: Clara Shenderey, Helena Vitoshkin, Mordechai Barak, Avraham Arbel
Abstract:
Greenhouse cultivation is an energy-intensive process due to the high demands on cooling or heating according to the external climatic conditions, which can be extreme in the summer or winter seasons. The thermal radiation rate inside a greenhouse depends mainly on the type of covering material and the greenhouse construction. Using additional thermal screens under a greenhouse covering, combined with a dehumidification system, improves the insulation and could be cost-effective. Greenhouse covering materials usually contain protective ultraviolet (UV) radiation additives to prevent film wear, insect damage, and crop diseases. This paper investigates the overall heat transfer coefficient, or U-value, for a greenhouse polyethylene covering containing UV additives and for a glass covering, with or without a thermal screen supplement. The hot-box method was employed to evaluate the overall heat transfer coefficients experimentally as a function of the type and number of thermal screens. The results show that the overall heat transfer coefficient decreases with an increasing number of thermal screens as a hyperbolic function. The overall heat transfer coefficient depends strongly on the ability of the material to reflect thermal radiation. Using a greenhouse covering, i.e., polyethylene films or glass, in combination with highly reflective thermal screens, i.e., containing about 98% aluminum stripes or aluminum foil, the U-value is reduced by 61%-89% in the first case and by 70%-92% in the second case, depending on the number of thermal screens. Using thermal screens made from materials of low reflectivity may reduce the U-value by 30%-57%. The heat transfer coefficient is an indicator of the thermal insulation properties of the materials, which allows farmers to make decisions on the use of appropriate thermal screens depending on the external and internal climate conditions in a greenhouse.
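The hyperbolic decrease of the U-value with the number of screens is what a simple series-resistance picture predicts: each added screen (plus its air gap) adds a thermal resistance, so 1/U grows roughly linearly with the screen count. The sketch below illustrates that relation with hypothetical resistance values; it is a simplified one-dimensional view and ignores the radiative-reflection effects that the measured screens rely on.

```python
# Sketch: series-resistance estimate of the overall U-value of a covering plus n screens.
# The covering U-value and the per-screen resistance below are hypothetical placeholders.

def u_value(n_screens, u_cover=6.0, r_per_screen=0.18):
    """Overall U (W/m^2.K): resistances in series, 1/U = 1/U_cover + n * R_screen."""
    return 1.0 / (1.0 / u_cover + n_screens * r_per_screen)

for n in range(5):
    u = u_value(n)
    reduction = 100.0 * (1.0 - u / u_value(0))
    print(f"{n} screen(s): U = {u:4.2f} W/m^2.K  ({reduction:4.1f}% reduction)")
```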
Keywords: Energy-saving thermal screen, greenhouse covering material, heat transfer coefficient, hot box.