Search results for: efficiency prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8645

1745 Competing Risks Modeling Using within Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, a homogeneity measure appropriate for event history data with multiple risks is needed. We consider the use of Deviance and modified Cox-Snell residuals as measures of impurity in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which the homogeneity measure was based on the Martingale residual. A data structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The univariate competing-risks results revealed that using the Deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance measure. Bone marrow transplant data and a double-blinded randomized clinical trial conducted to compare two treatments for patients with prostate cancer were used to demonstrate the efficiency of our proposed method vis-à-vis the existing ones. Empirical results for the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (Deviance = 16.6498) performs better than both the Martingale residual (Deviance = 160.3592) and the Deviance residual (Deviance = 556.8822) for both the event of interest and the competing risks. The prostate cancer results likewise favour the proposed model for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the Martingale residual (MSE = 0.1853148) and the Deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
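
Not drawn from the paper itself, but as an illustrative sketch of the residuals compared above (inputs are hypothetical: an event indicator and a fitted cumulative hazard at the observed time), the standard survival-analysis definitions can be coded as:

```python
import math

def martingale_residual(delta, cum_hazard):
    # delta: event indicator (1 = event, 0 = censored);
    # cum_hazard: fitted cumulative hazard at the observed time
    return delta - cum_hazard

def cox_snell_residual(delta, cum_hazard):
    # the Cox-Snell residual is the fitted cumulative hazard itself;
    # for a well-fitting model these behave like unit-exponential draws
    return cum_hazard

def deviance_residual(delta, cum_hazard):
    # symmetrising transform of the skewed martingale residual
    m = martingale_residual(delta, cum_hazard)
    log_term = delta * math.log(delta - m) if delta > 0 else 0.0
    return math.copysign(math.sqrt(-2.0 * (m + log_term)), m)
```

Such residuals serve as the response when growing the classification tree toward within-node homogeneity.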

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 270
1744 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental issues, the human body is regarded as a promising candidate for converting wasted heat into electricity (the Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict harvesting an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the TEG to the arbitrary surface of the body and to increase the temperature difference across the thermoelectric legs. To this end, a computational study using COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated heat sink on the module's power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at indoor thermal conditions with a stationary wearer. The full thermoelectric characterization of the proposed TEG fitted with a wavy-shaped heat sink has been computed, yielding a maximum power output of 25 µW/cm² at a temperature gradient of nearly 13 °C. Notably, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on curved surfaces of the body. The results therefore demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated without a heat sink, and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated by the body.
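
As a rough back-of-envelope companion to the figures reported above (not the COMSOL model itself; the module-level Seebeck coefficient and internal resistance below are illustrative assumptions), the matched-load output of a TEG follows directly from the Seebeck relation:

```python
def teg_matched_load_power(seebeck_v_per_k, delta_t_k, internal_resistance_ohm):
    # open-circuit voltage from the Seebeck effect: V = S * dT
    v_oc = seebeck_v_per_k * delta_t_k
    # maximum power delivered to an electrically matched load: V^2 / (4R)
    return v_oc ** 2 / (4.0 * internal_resistance_ohm)

# hypothetical module: S = 25 mV/K overall, R = 10 ohm, dT = 13 K
p_max = teg_matched_load_power(0.025, 13.0, 10.0)
```

The quadratic dependence on the temperature difference is why a heat sink that enlarges the gradient across the legs pays off disproportionately in output power.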

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 150
1743 Genome Editing in Sorghum: Advancements and Future Possibilities: A Review

Authors: Micheale Yifter Weldemichael, Hailay Mehari Gebremedhn, Teklehaimanot Hailesslasie

Abstract:

The advancement of target-specific genome editing tools, including clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), mega-nucleases, base editing (BE), prime editing (PE), transcription activator-like effector nucleases (TALENs), and zinc-finger nucleases (ZFNs), has paved the way for a modern era of gene editing. CRISPR/Cas9, as a versatile, simple, cost-effective and robust system for genome editing, has dominated the genome manipulation field over the last few years. The application of CRISPR/Cas9 to sorghum improvement is particularly vital in the context of ecological, environmental and agricultural challenges, as well as global climate change. In this context, gene editing using CRISPR/Cas9 can improve nutritional value, yield, resistance to pests and disease, and tolerance to different abiotic stresses. Moreover, CRISPR/Cas9 can potentially perform complex edits to reshape already available elite varieties and create new genetic variation. Existing research, however, is aimed at further improving the effectiveness of CRISPR/Cas9 genome editing techniques so that endogenous sorghum genes can be edited fruitfully. These findings suggest that genome editing is a feasible and successful venture in sorghum. Recent improvements and developments of CRISPR/Cas9 techniques have further enabled researchers to modify additional genes in sorghum with improved efficiency. The fruitful application and development of CRISPR techniques for genome editing in sorghum will help not only in gene discovery, in creating new and improved traits, in regulating gene expression and in sorghum functional genomics, but also in generating site-specific integration events.

Keywords: CRISPR/Cas9, genome editing, quality, sorghum, stress, yield

Procedia PDF Downloads 57
1742 Real-Time Optimisation and Minimal Energy Use for Water and Environment Efficient Irrigation

Authors: Kanya L. Khatri, Ashfaque A. Memon, Rod J. Smith, Shamas Bilal

Abstract:

The viability and sustainability of crop production are currently threatened by increasing water scarcity. Water scarcity can be addressed through improved water productivity, and the options usually considered in this context are efficient water use and conversion of surface irrigation to pressurized systems. By replacing furrow irrigation with drip or centre-pivot systems, water efficiency can be improved by 30 to 45%. However, the installation and operation of pumps and pipes, and the fuels these alternatives require, increase energy consumption and cause significant greenhouse gas emissions. Hence, a balance between improved water use and the potential increase in energy consumption is required, in view of the adverse impact of increased carbon emissions on the environment. When surface water is used, pressurized systems increase energy consumption substantially, by 65% to 75%, and produce greenhouse gas emissions around 1.75 times higher than gravity-based irrigation; with gravity-based surface irrigation methods, the energy consumption is assumed to be negligible. This study has shown that a novel real-time infiltration model, REIP, has enabled real-time optimization and control of surface irrigation, and that surface irrigation with real-time optimization can bring significant improvements in irrigation performance along with substantial water savings of 2.92 ML/ha, almost equivalent to those given by pressurized systems. Thus, real-time optimization and control offers a modern, environmentally friendly and water-efficient system with close to zero increase in energy consumption and minimal greenhouse gas emissions.

Keywords: pressurised irrigation, carbon emissions, real-time, environmentally-friendly, REIP

Procedia PDF Downloads 501
1741 Acoustic Emission Techniques in Monitoring Low-Speed Bearing Conditions

Authors: Faisal AlShammari, Abdulmajid Addali, Mosab Alrashed

Abstract:

It is widely acknowledged that bearing failures are the primary reason for breakdowns in rotating machinery. These failures are extremely costly, particularly in terms of lost production. Roller bearings are widely used in industrial machinery and need to be maintained in good condition to ensure the continuing efficiency, effectiveness, and profitability of the production process. The research presented here investigates the use of acoustic emission (AE) to monitor bearing conditions at low speeds. Many machines, particularly large, expensive ones, operate at speeds below 100 rpm, and such machines are important to industry. However, the overwhelming majority of studies have investigated AE techniques for condition monitoring of higher-speed machines (typically several hundred rpm or more); few researchers have applied these techniques to low-speed machines (< 100 rpm). This paper addresses that omission and establishes which of the available AE techniques are suitable for detecting incipient faults and measuring fault growth in low-speed bearings. The first objective of this programme was to assess the applicability of AE techniques to monitoring low-speed bearings; it was found that the measured statistical parameters successfully monitored bearing conditions at low speeds (10-100 rpm). The second objective was to identify which commonly used statistical parameters derived from the AE signal (RMS, kurtosis, amplitude and counts) could identify the onset of a fault in the outer race. It was found that these parameters effectively identified the presence of a small fault seeded into the outer race. It is also concluded that rotational speed has a strong influence on the measured AE parameters, whereas they are essentially independent of the load under the load and speed conditions tested.
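
The four signal statistics named above (RMS, kurtosis, amplitude, counts) are standard AE features; a minimal stdlib-Python sketch of their computation over a sampled AE waveform (the count threshold is a hypothetical parameter) might look like:

```python
import math

def ae_features(signal, count_threshold):
    n = len(signal)
    mean = sum(signal) / n
    # RMS: overall energy content of the burst
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    # kurtosis: 4th standardised moment; ~3 for Gaussian noise,
    # rising sharply when impulsive defect bursts appear
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / var ** 2
    # amplitude: peak of the rectified signal
    amplitude = max(abs(x) for x in signal)
    # counts: samples of the rectified signal exceeding the threshold
    counts = sum(1 for x in signal if abs(x) > count_threshold)
    return {"rms": rms, "kurtosis": kurt,
            "amplitude": amplitude, "counts": counts}
```

In practice these features are tracked over successive acquisition windows, and a sustained rise (particularly in kurtosis) flags an incipient fault.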

Keywords: acoustic emission, condition monitoring, NDT, statistical analysis

Procedia PDF Downloads 247
1740 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable water reserves for the river systems originating from the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of water resources. This research assesses the energy exchanges between the snowpack, the air above and the soil below according to mass and energy balance, which makes it more appropriate than models that use a simple temperature index for computing snow and glacier melt. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed with Geo-SFM. Model robustness is maintained by incorporating albedo generated from Landsat-7 ETM+ images on a seasonal basis for 2002-2003 and a substrate map derived from TM imagery. The substrate file comprises four major thematic layers: snow, clean ice, glaciers and barren land. The approach uses the CPC RFE-2 and MERRA gridded data sets as the sources of precipitation and climatic variables. The model run for 2002-2008 shows that a total annual melt of 17.15 m is generated from the Marshyangdi basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics: a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt, routed with the land cover and soil variables using Geo-SFM, shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
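
For reference (not the authors' code), the Nash-Sutcliffe efficiency quoted above is computed from paired observed and simulated discharges; a minimal sketch:

```python
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    # NSE = 1 for a perfect fit; 0 means the model predicts no better
    # than the mean of the observations; negative values are worse
    return 1.0 - ss_res / ss_tot
```

An NSE of 0.60 thus means the simulated hydrograph explains 60% of the observed variance relative to the climatological mean.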

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 453
1739 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example DIN EN 1992-1-1, are based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing oriented to the transverse reinforcement bars or to the stirrups. In most finite element analyses, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by factors such as the initially random distribution of defects and the scatter of material properties. Such behaviour calls for adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was therefore carried out to investigate (i) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (ii) crack initiation as a function of the diameter and spacing of the transverse reinforcement.
The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods of generating random fields, e.g. the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack width depend strongly on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of a random field that realistically models the uncertainty of the tensile strength is also given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in a finite element analysis.
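
For illustration only (not the authors' implementation), the covariance matrix decomposition method mentioned above can be sketched for a 1D Gaussian random field with an assumed exponential covariance; a lognormal field follows by exponentiating the samples:

```python
import math
import random

def exp_covariance(points, sigma, corr_length):
    # exponential covariance kernel: C_ij = sigma^2 * exp(-|x_i - x_j| / L)
    return [[sigma ** 2 * math.exp(-abs(xi - xj) / corr_length)
             for xj in points] for xi in points]

def cholesky(a):
    # lower-triangular factor L with A = L L^T (A symmetric positive definite)
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def sample_field(points, sigma, corr_length, rng):
    # covariance matrix decomposition: field = L z with z ~ N(0, I)
    l = cholesky(exp_covariance(points, sigma, corr_length))
    z = [rng.gauss(0.0, 1.0) for _ in points]
    return [sum(l[i][k] * z[k] for k in range(i + 1))
            for i in range(len(points))]
```

Exponentiating each sampled value yields a lognormally distributed field with the same underlying correlation length, matching the two distribution types compared in the study.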

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 157
1738 Prevention of Corruption in Public Purchases

Authors: Anatoly Krivinsh

Abstract:

The results of the dissertation research "Preventing and Combating Corruption in Public Procurement" are presented in this publication. The study was conducted from 2011 to 2013 in a Member State of the European Union, the Republic of Latvia. The goal of the thesis is to explore corruption prevention and combating issues in the public procurement sphere, and to identify the prevalence rates, determinants, contributing factors and prevention opportunities in Latvia. In the first chapter, the author analyses theoretical aspects of understanding corruption in public procurement, with particular emphasis on the problem of defining corruption, its nature, causes and consequences. A separate section is dedicated to the public procurement concept, mechanism and legal framework. In this first part, the author presents a cognitive methodology for corruption in the public procurement field, on the basis of which an analysis of the corruption situation in public procurement in the Republic of Latvia was carried out. In the second chapter, the author analyses the problem of corruption in public procurement, including its historical aspects, the typology and classification of the corruption subjects involved, and corruption risk elements in public procurement and their identification; the author's practical experience in public procurement was drawn upon extensively in developing this chapter. The third and fourth chapters deal with issues related to preventing and combating corruption in public procurement, namely the operative concept, principles, methods, techniques and subjects in the Republic of Latvia, together with an analysis of foreign experience in preventing and combating corruption. The fifth chapter is devoted to the prospects for corruption prevention and combating and their assessment.
In this chapter, the author evaluates the efficiency of corruption prevention and combating measures in the Republic of Latvia and assesses the stage of development of anti-corruption legislation in the public procurement field in Latvia.

Keywords: prevention of corruption, public purchases, good governance, human rights

Procedia PDF Downloads 332
1737 On the Effect of Carbon on the Efficiency of Titanium as a Hydrogen Storage Material

Authors: Ghazi R. Reda Mahmoud Reda

Abstract:

Among the hydride-forming metals, Mg and Ti are known as the most lightweight materials; however, they are covered with a passive layer of oxides and hydroxides and require activation treatment under high temperature (> 300 °C) and hydrogen pressure (> 3 MPa) before being used for storage and transport applications. It is well known that a small graphite addition to Ti or Mg leads to a dramatic change in the kinetics of mechanically induced hydrogen sorption (uptake) and significantly stimulates the Ti-hydrogen interaction. Different authors have given many explanations for the effect of graphite addition on the performance of Ti as a hydrogen storage material; not only graphite but also the addition of a polycyclic aromatic compound improves the hydrogen absorption kinetics. It will be shown that the function of the carbon addition is two-fold. First, carbon acts as a vacuum cleaner, scavenging out the interstitial oxygen that can poison or slow down hydrogen absorption; it is also important to note that oxygen favours the chemisorption of hydrogen, which is not desirable for hydrogen storage. Second, while scavenging the interstitial oxygen, the carbon reacts with oxygen in the nano- and microchannels through a highly exothermic reaction to produce carbon dioxide and monoxide, which provide the heat necessary for activation; thus, in the presence of carbon, the experimentally observed lower heat of activation for hydrogen absorption is explained. Furthermore, the reaction of hydrogen with the carbon oxides produces water, which, owing to ball milling, hydrolyzes to produce the linear H₅O₂⁺ ion; this reconstructs the primary structure of the nanocarbon into a secondary structure in which the primary structures (sheets of carbon) are connected through hydrogen bonding. It is the space between these sheets where physisorption or defect-mediated sorption occurs.

Keywords: hydride-forming metals, polar molecule impurities, titanium, phase diagram, hydrogen absorption

Procedia PDF Downloads 360
1736 The Influence of Training and Competition on Cortisol Levels and Sleep in Elite Female Athletes

Authors: Shannon O’Donnell, Matthew Driller, Gregory Jacobson, Steve Bird

Abstract:

Stress hormone levels in a competition vs. training setting are yet to be evaluated in elite female athletes, and the effect of these stress levels on subsequent sleep quality and quantity is also yet to be investigated. The aim of the current study was to evaluate different psychophysiological stress markers in competition and training environments and their subsequent effect on sleep indices in an elite female athlete population. The study involved 10 elite female netball athletes (mean ± SD; age = 23 ± 6 yrs) who provided multiple salivary hormone measures and had their sleep monitored on two occasions: a match day and a training day. The training session and the match were performed at the same time of day and were matched for intensity and duration. Saliva samples were collected immediately pre-session (5:00 pm), immediately post-session (7:15 pm) and at 10:00 pm, and were analysed for cortisol concentration. Sleep monitoring was performed using wrist actigraphy to assess total sleep time (TST), sleep efficiency (SE%) and sleep latency (SL). Cortisol levels were significantly higher (p < 0.01) immediately post-match vs. post-training (mean ± SD; 0.925 ± 0.341 μg/dL and 0.239 ± 0.284 μg/dL, respectively) and at 10:00 pm (0.143 ± 0.085 μg/dL and 0.072 ± 0.064 μg/dL, respectively, p < 0.01). The difference between trials was associated with a very large effect (ES: 2.23) immediately post-session (7:15 pm) and a large effect (ES: 1.02) at 10:00 pm. There was a significant reduction in TST (mean ± SD; -117.9 ± 111.9 minutes, p < 0.01, ES: -1.89) and SE% (-7.7 ± 8.5%, p < 0.02, ES: -0.79) on the night following the netball match compared to the training session. Although not significant (p > 0.05), there was an increase in SL following the netball match vs. the training session (67.0 ± 51.9 minutes and 38.5 ± 29.3 minutes, respectively), which was associated with a moderate effect (ES: 0.80).
The current study reports that cortisol levels are significantly higher, and subsequent sleep quantity and quality significantly reduced, in elite female athletes following a match compared to a training session.
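
The effect sizes (ES) reported above are standardised mean differences of the Cohen's d family; since the abstract does not state the exact variant used, the following is a sketch of the common pooled-SD form only:

```python
import math

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    # pooled standard deviation across the two conditions
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp
```

By the usual convention, |d| around 0.8 is "large", which is why the ES of 2.23 for post-match cortisol is described as "very large".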

Keywords: cortisol, netball, performance, recovery

Procedia PDF Downloads 255
1735 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detecting gully features, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study addresses gully-affected area detection in three catchments of the Chinese Loess Plateau, located in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influence of the segmentation strategy and of feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 m and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which incorporates topographic information, together with an optimal parameter combination, can improve the segmentation results. The overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 215
1734 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two types depending on whether they can be replenished. Sources that cannot be restored to their original state once consumed are non-renewable energy resources (e.g., coal, fuel), whereas those that are replenished even after being consumed are renewable energy resources (e.g., wind, solar, hydel). Renewable energy is a cost-effective way to generate clean and green electrical energy, and nowadays the majority of countries are paying heed to energy generation from renewable energy sources (RES). Pakistan mostly relies on conventional energy resources, which are largely non-renewable in nature; coal and fuel are among the major resources, and their prices are increasing with time. On the other hand, RES have great potential in the country, and with their deployment greater reliability and a more effective power system can be obtained. In this thesis, a similar concept is used and a hybrid power system is proposed, composed of a mix of renewable and non-renewable sources. The source side comprises solar, wind, and fuel cells, which are used in an optimal manner to serve the load. The goal is to provide an economical, reliable, uninterruptible power supply. This is achieved by an optimal controller (PI, PD, PID, FOPID). Optimization techniques are applied to the controllers to achieve the desired results, and advanced algorithms (particle swarm optimization, the flower pollination algorithm) are used to extract the desired output from the controller. A detailed comparison in the form of tables and results will be provided to highlight the efficiency of the proposed system.
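
As a sketch of the controller family listed above (a discrete PID; the gains, time step and first-order plant below are illustrative assumptions, and PI or PD variants follow by zeroing the unused terms):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        # accumulate the integral term (rectangular rule)
        self.integral += error * self.dt
        # backward-difference approximation of the error derivative
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# closing the loop around a hypothetical first-order plant dx/dt = u - x
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
x = 0.0
for _ in range(600):
    u = pid.step(1.0, x)
    x += (u - x) * 0.1  # explicit Euler step of the plant
```

Metaheuristics such as particle swarm optimization or the flower pollination algorithm would then search over (kp, ki, kd) to minimise a tracking-error cost for the hybrid system model.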

Keywords: distributed generation, demand-side management, hybrid power system, micro grid, renewable energy resources, supply-side management

Procedia PDF Downloads 96
1733 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications

Authors: Aymen Laadhari

Abstract:

The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for the development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at the theoretical and computational levels, and has consequently attracted growing interest over the past decade. Accurate computational investigation, in both healthy and pathological cases, of processes related to the functioning of the human cardiovascular system holds great potential for tackling several problems of clinical relevance and for improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored to the simulation of (i) the haemodynamics (i.e., the fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentration on the contraction of single cells, and (iii) the dynamics of red blood cells in the microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.

Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell

Procedia PDF Downloads 251
1732 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

Considering real-world settings, higher academic institutions, small, medium and large companies, and the public and private sectors alike experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors. This happens because of absent or poor security systems and measures in these organizations. Implementing Radio Frequency Identification (RFID) technology in a manual or existing web-based system or web application can deter, and eventually solve, several major issues by providing better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application, and the availability of internet connections can support better services. The involvement of these technologies brings various benefits to individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technology for the purpose of asset tracking and control. This is followed by the development and use of MongoDB as the main database for storing data and its association with the RFID technology. Finally, a web-based system viewable in a mobile-friendly form is developed with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript and AJAX.

Keywords: RFID, asset tracking system, MongoDB, NoSQL

Procedia PDF Downloads 305
1731 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles to represent the urban topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation along the Reyran river valley following the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with a notable efficiency speedup due to parallelization.

Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations

Procedia PDF Downloads 174
1730 Microwave Heating and Catalytic Activity of Iron/Carbon Materials for H₂ Production from the Decomposition of Plastic Wastes

Authors: Peng Zhang, Cai Liang

Abstract:

Non-biodegradable plastic wastes have caused severe environmental and ecological contamination. Numerous technologies, such as pyrolysis, incineration, and landfilling, have been employed for the treatment of plastic waste. Compared with these conventional methods, microwave heating displays unique advantages for the rapid production of hydrogen from plastic wastes. Understanding the interaction between microwave radiation and materials helps optimize the parameters of a microwave reaction system. In this work, various carbon materials were investigated to reveal their microwave heating performance and the ensuing catalytic activity. Results showed that the diversity in heating characteristics was mainly due to the dielectric properties and the individual microstructures. Furthermore, gaps and steps on the surface of the carbon materials distort the electromagnetic field, which correspondingly induces plasma discharging; the intensity and location of the local plasma were also studied. For high-yield H₂ production, iron nanoparticles were selected as the active sites, and a series of iron/carbon bifunctional catalysts was synthesized. Apart from their high catalytic activity, iron particles of nano-size close to the microwave skin depth transfer microwave irradiation into heat, intensifying the decomposition of the plastics. Under microwave radiation, iron supported on activated carbon at 10 wt.% loading exhibited the best catalytic activity for H₂ production. Specifically, the plastics were rapidly heated and converted into H₂ with a hydrogen efficiency of 85%. This work provides a deeper understanding of microwave reaction systems and guidance for optimizing plastic treatment.
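
The microwave skin depth mentioned above has a standard closed form for a conductor, δ = √(2 / (μσω)), the depth at which the field amplitude falls to 1/e. The sketch below evaluates it at 2.45 GHz (the common microwave heating frequency); the conductivity and relative permeability used for iron are rough illustrative assumptions, since both vary strongly with temperature and microstructure:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(conductivity_S_per_m, rel_permeability, freq_hz):
    """Electromagnetic skin depth delta = sqrt(2 / (mu * sigma * omega))."""
    omega = 2.0 * math.pi * freq_hz
    mu = MU0 * rel_permeability
    return math.sqrt(2.0 / (mu * conductivity_S_per_m * omega))

# Illustrative values for iron at 2.45 GHz -> a depth of a few hundred nm,
# i.e. comparable to the nanoparticle size discussed in the abstract.
delta = skin_depth(conductivity_S_per_m=1.0e7, rel_permeability=100.0,
                   freq_hz=2.45e9)
```

Particles no larger than δ are heated through their whole volume, which is why nano-sized iron couples so effectively to the field.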

Keywords: plastic waste, recycling, hydrogen, microwave

Procedia PDF Downloads 67
1729 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch resistant or high gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are however both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. 
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally performed using an orthotropic formulation of the hyperelastic model.
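
As an illustration of this class of models (a sketch of the general form, not the authors' exact formulation), an invariant-based compressible strain energy with a reduced set of invariants can be written as:

```latex
W(\mathbf{C}) \;=\; W_{\mathrm{vol}}(J) \;+\; W_{\mathrm{iso}}(I_1) \;+\; \sum_{a} W_{a}\!\left(I_4^{(a)}\right),
\qquad
I_1 = \operatorname{tr}\mathbf{C},\quad
J = \sqrt{\det\mathbf{C}},\quad
I_4^{(a)} = \mathbf{a}_0^{(a)} \cdot \mathbf{C}\,\mathbf{a}_0^{(a)},
```

where C is the right Cauchy-Green deformation tensor and the a₀⁽ᵃ⁾ are the preferred material directions (one for transverse isotropy, two for orthotropy). In a semi-numerical formulation, the scalar functions W_iso and W_a are not given closed algebraic forms but are determined pointwise from the uniaxial tensile tests in the principal directions.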

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 616
1728 A Three-Dimensional Investigation of Stabilized Turbulent Diffusion Flames Using Different Type of Fuel

Authors: Moataz Medhat, Essam E. Khalil, Hatem Haridy

Abstract:

In the present study, a three-dimensional numerical simulation of the steady-state combustion of a staged natural gas flame in a 300 kW swirl-stabilized burner is performed using the ANSYS solver, with the aim of finding the highest combustion efficiency by varying the inlet air swirl number and burner quarl angle in a furnace, and of showing the effects of flue gas recirculation, fuel type, and staging. The combustion chamber of the gas turbine is a cylinder of 1006.8 mm diameter and 1651 mm height, ending with a hood leading to the exhaust cylinder through which the combustion products exit; the exhaust cylinder has a diameter of 300 mm and a height of 751 mm. Owing to the axisymmetry of the geometry, a 15-degree sector of the circumference was modeled and divided into a mesh of about 1.1 million cells. The numerical simulations were performed by solving the governing equations in a three-dimensional model, using the realizable k-epsilon equations to express turbulence and a non-premixed flamelet combustion model that takes radiation effects into consideration. The results were validated by comparison with experimental data to ensure agreement. The study revealed two recirculation zones: a primary one at the center of the furnace, and a secondary one whose location varies with the quarl angle of the burner. The increase in temperature in the external recirculation zone was found to result from increasing the swirl number of the inlet air stream. It was also found that recirculating part of the combustion products back to the combustion zone decreases pollutant formation, especially of nitrogen monoxide.

Keywords: burner selection, natural gas, analysis, recirculation

Procedia PDF Downloads 160
1727 Performance Investigation of Thermal Insulation Materials for Walls: A Case Study in Nicosia (Turkish Republic of North Cyprus)

Authors: L. Vafaei, McDominic Eze

Abstract:

The thermal performance of homes and buildings is a significant factor in the energy efficiency of a building. Broadly, thermal performance depends on many factors, among which the amount of thermal insulation is considerable, as are the thermal mass, the wall thickness, and the thermal resistance of the wall material. This study illustrates the different wall systems used in the Turkish Republic of North Cyprus (TRNC), identifies the problem, and suggests a solution by comparing the effect of thermal radiation on two model rooms set up for experimentation: L1 (Ytong wall) and L2 (heat-insulated wall using stone wool). Each model room has four face walls. The study consists of two stages: the first assesses the effect of solar radiation on the south-facing wall, and the second tests the thermal performance of the Ytong and heat-insulated walls under winter climatic conditions. The heat-insulated wall is composed of hollow brick, stone wool, and gypsum, while the Ytong wall consists of cement concrete on the outer and inner surfaces and Ytong stone. The total heat transfer through the walls was determined using seven T-type thermocouples connected to a data logger system, with temperature changes recorded at 10-minute intervals. The results showed that the Ytong wall saves more energy than the heat-insulated wall at night, while the heat-insulated wall saves more energy during the day, when solar intensity is at its maximum.
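
The steady-state thermal transmittance (U-value) of such a layered wall follows from summing the layer resistances, U = 1 / (R_si + Σ tᵢ/kᵢ + R_se). The sketch below uses illustrative layer thicknesses, conductivities, and surface resistances (assumptions, not the measured values from this study):

```python
# U-value of a layered wall. Lower U means better insulation.
def u_value(layers, r_si=0.13, r_se=0.04):
    """layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    r_si / r_se: internal / external surface resistances, m^2 K / W."""
    r_total = r_si + r_se + sum(t / k for t, k in layers)
    return 1.0 / r_total

# Hypothetical make-up of the heat-insulated wall (illustrative values only):
insulated_wall = [
    (0.200, 0.45),    # hollow brick
    (0.050, 0.035),   # stone wool
    (0.0125, 0.25),   # gypsum board
]
u = u_value(insulated_wall)   # W/(m^2 K); here the stone wool layer dominates
```

The dominance of the stone wool term (0.05/0.035 ≈ 1.43 m²K/W of the total resistance) shows why a thin insulation layer changes daytime behaviour so strongly, while the heavier Ytong wall benefits at night from its thermal mass, which a steady-state U-value does not capture.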

Keywords: heat insulation, hollow bricks, south facing, Ytong bricks wall

Procedia PDF Downloads 265
1726 Improving Sample Analysis and Interpretation Using QIAGEN's Latest Investigator STR Multiplex PCR Assays with a Novel Quality Sensor

Authors: Daniel Mueller, Melanie Breitbach, Stefan Cornelius, Sarah Pakulla-Dickel, Margaretha Koenig, Anke Prochnow, Mario Scherer

Abstract:

The European STR standard set (ESS) of loci, as well as the expanded CODIS core loci set recommended by the CODIS Core Loci Working Group, has led to greater standardization and harmonization of STR analysis across borders. Various multiplex PCR assays have since been developed for the analysis of these 17 ESS or 23 CODIS expansion STR markers, all of which meet high technical demands. However, forensic analysts are often faced with difficult STR results and the questions they raise: Why are no peaks visible in the electropherogram? Did the PCR fail? Was the DNA concentration too low? QIAGEN's newest Investigator STR kits contain a novel Quality Sensor (QS) that acts as an internal performance control and provides useful information for evaluating the amplification efficiency of the PCR. QS indicates whether the reaction has worked in general and, furthermore, allows discrimination between the presence of inhibitors and DNA degradation as causes of the typical ski-slope effect observed in STR profiles of such challenging samples. This information can be used to choose the most appropriate rework strategy. Based on the latest PCR chemistry, called FRM 2.0, QIAGEN now provides the next technological generation for STR analysis: the Investigator ESSplex SE QS and Investigator 24plex QS Kits. The new PCR chemistry ensures robust and fast PCR amplification with improved inhibitor resistance and easy handling for manual or automated setup. The short cycling time of 60 min reduces the duration of the total PCR analysis, making completion of the whole workflow in one day feasible. To facilitate the interpretation of STR results, a smart primer design was applied for the best possible marker distribution, the highest concordance rates, and robust gender typing.

Keywords: PCR, QIAGEN, quality sensor, STR

Procedia PDF Downloads 494
1725 Dilution of Saline Irrigation Based on Plant's Physiological Responses to Salt Stress Following by Re-Watering

Authors: Qaiser Javed, Ahmad Azeem

Abstract:

Salinity and water scarcity are major environmental problems limiting agricultural production. This research was conducted to construct a model for finding an appropriate regime for diluting saline water, based on the physiological and electrophysiological properties of Brassica napus L. and Orychophragmus violaceus (L.). Plants were treated with salt-stress concentrations of NaCl (NL₁: 2.5, NL₂: 5, NL₃: 10 gL⁻¹), Na₂SO₄ (NO₁: 2.5, NO₂: 5, NO₃: 10 gL⁻¹), and mixed salts (MX₁: NL₁ + NO₃; MX₂: NL₃ + NO₁; MX₃: NL₂ + NO₂; gL⁻¹), with 0 as control, followed by re-watering. Growth, physiological, and electrophysiological traits were highly restricted under the high salt concentrations NL₃, NO₃, MX₁, and MX₂, respectively. During the re-watering phase, however, growth, electrophysiological, and physiological parameters recovered well. Consequently, increases in net photosynthetic rate were noted under moderate stress conditions of 44.13%, 37.07%, and 43.01% in Orychophragmus violaceus (L.), and of 44.94%, 53.45%, and 63.04% in Brassica napus L. According to the results, the best dilution point was 5–2.5% for NaCl and Na₂SO₄ alternately, whereas it was 10–0.0% for the mixture of salts. The effect of salinity on O. violaceus and B. napus may therefore be reduced effectively by dilution of the saline irrigation: it is a better approach to irrigate with diluted saline water than to apply saline water directly to the plants. This study provides new insight for agricultural engineering into planning irrigation schedules that consider crop salt tolerance and irrigation water-use efficiency, by applying a specific quantity of irrigation calculated from the salt dilution point. It can help balance the irrigation amount against optimal crop water consumption in salt-affected regions and utilize saline water in order to conserve freshwater resources.
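
The dilution itself reduces to a simple mass balance, C_mix(V_s + V_f) = C_s·V_s when the fresh water is assumed to carry ~0 g/L of salt. A minimal sketch (the target concentrations below are illustrative of the dilution points reported, not a reimplementation of the authors' model):

```python
def fresh_water_needed(c_saline, v_saline, c_target):
    """Litres of fresh water to add to v_saline litres of water at
    c_saline g/L salt so that the mixture reaches c_target g/L."""
    if not 0 < c_target <= c_saline:
        raise ValueError("target must be positive and below the saline level")
    return v_saline * (c_saline - c_target) / c_target

# Diluting 10 g/L saline water down to 2.5 g/L:
# add 3 L of fresh water per litre of saline water.
v_f = fresh_water_needed(c_saline=10.0, v_saline=1.0, c_target=2.5)
```

Scheduling irrigation from such a dilution point, rather than applying raw saline water, is what lets the approach trade off crop salt tolerance against freshwater savings.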

Keywords: dilution model, plant growth traits, re-watering, salt stress

Procedia PDF Downloads 157
1724 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning

Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah

Abstract:

Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
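
The clustering stage of the framework can be sketched minimally: encoded frame representations are grouped into view-related clusters with k-means. The sketch below stands in tiny hand-made 2-D vectors for the learned autoencoder embeddings and uses a pure-Python k-means (the actual pipeline would use a standard library implementation on high-dimensional representations):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    centroids = [list(p) for p in points[:k]]      # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return labels, centroids

# Toy "embeddings": two tight groups, standing in for two echo views.
embeddings = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels, _ = kmeans(embeddings, k=2)
```

In the paper's setting the number of clusters corresponds to the number of echocardiographic views, and clustering can be run per frame or per discriminative patch.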

Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning

Procedia PDF Downloads 31
1723 Modelling and Simulating CO₂ Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO₂ to formic acid. The model achieves a current density (CD) of ~60 mA/cm², a faradaic efficiency (FE) of ~98%, and a conversion of ~80% for CO₂ electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn-based nanoparticle catalyst (on the cathode surface) at different mole fractions, and of a 1-ethyl-3-methylimidazolium tetrafluoroborate ([EMIM][BF₄]) electrolyte, on CD, FE, and CO₂ conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF₄]). Based on the mass transfer analysis (concentration contours), the 0.5:0.5 Bi-Sn catalyst mole ratio displays the highest CO₂ consumption in the cathode gas channel. After validation against experimental polarisation curves from the literature, extensive simulations reveal the performance measures CD, FE, and CO₂ conversion. Increasing the negative cathode potential increases the current densities of both formic acid and H₂ formation. However, H₂ formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte; the limited hydrogen ions nonetheless have a negative effect on the formic acid CD. As the CO₂ flow rate increases, CD, FE, and CO₂ conversion all increase.
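
The faradaic efficiency quoted above has a simple definition: the fraction of the charge passed that ends up in the desired product, FE = z·F·n_product / Q_total, with z = 2 electrons per formate ion. A minimal sketch (the moles and charge below are illustrative numbers, not the paper's operating data):

```python
F = 96485.0  # C/mol, Faraday constant

def faradaic_efficiency(n_product_mol, charge_C, z=2):
    """Fraction of the total charge converted into the product;
    z is the number of electrons transferred per product molecule."""
    return z * F * n_product_mol / charge_C

# Example: 0.5 mmol of formate from 100 C of charge -> FE of roughly 0.96.
fe = faradaic_efficiency(n_product_mol=5.0e-4, charge_C=100.0)
```

The remainder of the charge (1 − FE) is lost to side reactions such as H₂ evolution, which is why suppressing hydrogen formation in the ionic liquid electrolyte pushes FE toward the reported ~98%.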

Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling

Procedia PDF Downloads 144
1722 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity, with symptoms that often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcement (i.e., reward or punishment) helps improve the performance of children with such difficulties. However, to optimize their impact, reward and punishment must be presented immediately after the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performance, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performance. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and that goal setting based on feedback would be effective for children with severe ADHD-related symptoms. The sample comprised Japanese elementary school children and their parents. The children performed two kinds of go/no-go tasks, and the parents completed checklists on their children's ADHD symptoms: the ADHD Rating Scale-IV and the Conners 3rd edition.
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate impaired inhibition. To examine the effect of goal setting on inhibitory control, 37 children (mean age = 9.49 ± 0.51 years) were required to set a performance goal and 34 children (mean age = 9.44 ± 0.50 years) were not. Further, to manipulate the presence of feedback, no information about the children's scores was provided in one go/no-go task, whereas scores were revealed in the other. The results showed a significant interaction between goal setting and feedback; however, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicate that goal setting was effective in improving go/no-go performance only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective in improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
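
Scoring a go/no-go session reduces to counting two error types: commission errors (a key press on a no-go trial, the inhibition failure of interest) and omission errors (a missed go trial). A minimal sketch with invented trial data:

```python
def score(trials):
    """trials: list of (stimulus, pressed) with stimulus 'go' or 'nogo'.
    Returns (commission_error_rate, omission_error_rate)."""
    nogo = [pressed for stim, pressed in trials if stim == "nogo"]
    go = [pressed for stim, pressed in trials if stim == "go"]
    commission = sum(1 for pressed in nogo if pressed) / len(nogo)
    omission = sum(1 for pressed in go if not pressed) / len(go)
    return commission, omission

# Invented 8-trial session: one failed inhibition, one missed go trial.
trials = [("go", True), ("go", True), ("nogo", False), ("nogo", True),
          ("go", False), ("nogo", False), ("go", True), ("nogo", False)]
commission_rate, omission_rate = score(trials)
```

A higher commission rate indicates weaker inhibitory control, which is the measure the goal-setting and feedback manipulations aim to improve.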

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 101
1721 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded, respectively, to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) was extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the Genecard database.
The bio-functional relevance of the six hub genes was assessed with a robust classifier achieving an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating the disease group from the control group. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, broadening the potential for clinical applications and therapeutic discoveries.
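
The AUC of 0.873 summarizes how often the six-gene classifier ranks a disease sample above a control sample. The pairwise (Mann-Whitney) form of the AUC can be computed directly; the labels and scores below are made up for illustration:

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.
    labels: 1 = disease, 0 = control; scores: classifier outputs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # a correctly ordered pair scores 1, a tie scores 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented scores: one disease sample is ranked below one control sample,
# so 3 of the 4 pairs are ordered correctly -> AUC = 0.75.
auc = roc_auc([0, 0, 1, 1], [0.10, 0.42, 0.35, 0.90])
```

An AUC of 0.5 corresponds to chance-level separation and 1.0 to perfect separation, which places the reported 0.873 firmly in the "good discriminator" range.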

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 65
1720 Lithium Ion Supported on TiO₂ Mixed Metal Oxides as a Heterogeneous Catalyst for Biodiesel Production from Canola Oil

Authors: Mariam Alsharifi, Hussein Znad, Ming Ang

Abstract:

Considering environmental issues and the shortage of conventional fossil fuel sources, biodiesel has emerged as a promising sustainable and renewable alternative to fossil-based fuel. It is synthesized by the transesterification of vegetable oils or animal fats with an alcohol (methanol or ethanol) in the presence of a catalyst. This study focuses on synthesizing a highly efficient Li/TiO₂ heterogeneous catalyst for biodiesel production from canola oil. In this work, lithium was immobilized onto TiO₂ by the simple impregnation method, and the catalyst was evaluated in a transesterification reaction in a batch reactor under moderate reaction conditions. To study the effect of Li concentration, a series of LiNO₃ loadings (20, 30, 40 wt.%) at different calcination temperatures (450, 600, 750 ºC) was evaluated. The Li/TiO₂ catalysts were characterized by several spectroscopic and analytical techniques: XRD, FT-IR, BET, TG-DSC, and FESEM. The optimum values of lithium nitrate loading on TiO₂ and calcination temperature were 30 wt.% and 600 ºC, respectively, giving a high conversion of 98%. The XRD study revealed that the insertion of Li improved the catalyst efficiency without any alteration of the TiO₂ structure. The best performance of the catalyst was achieved using a methanol-to-oil ratio of 24:1 and 5 wt.% catalyst loading at a 65 ºC reaction temperature for a reaction time of 3 hours. Moreover, the experimental kinetic data were compatible with the pseudo-first-order model, and the activation energy was 39.366 kJ/mol. The synthesized Li/TiO₂ catalyst was also applied to transesterify used cooking oil and exhibited a 91.73% conversion. The prepared catalyst thus shows high catalytic activity for producing biodiesel from both fresh and used oil under mild reaction conditions.
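
The pseudo-first-order model relates conversion X to the rate constant via −ln(1 − X) = k·t, and an activation energy follows from rate constants at two temperatures through the Arrhenius relation, Ea = R·ln(k₂/k₁) / (1/T₁ − 1/T₂). The sketch below uses the reported 98% conversion in 3 h; the two rate constants in the Arrhenius step are illustrative values chosen only to land near the tens-of-kJ/mol scale reported, not the paper's fitted data:

```python
import math

R = 8.314  # J/(mol K), gas constant

def rate_constant(conversion, time_h):
    """Pseudo-first-order rate constant from -ln(1 - X) = k t."""
    return -math.log(1.0 - conversion) / time_h

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate: Ea = R ln(k2/k1) / (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

k = rate_constant(conversion=0.98, time_h=3.0)        # ~1.3 per hour
Ea = activation_energy(k1=0.5, T1=318.0, k2=1.2, T2=338.0)  # J/mol, ~39 kJ/mol
```

A two-point estimate like this is the minimal version of fitting ln k against 1/T over several temperatures, which is how an activation energy such as the reported 39.366 kJ/mol is usually obtained.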

Keywords: biodiesel, canola oil, environment, heterogeneous catalyst, impregnation method, renewable energy, transesterification

Procedia PDF Downloads 174
1719 The Agri-Environmental Instruments in Agricultural Policy to Reduce Nitrogen Pollution

Authors: Flavio Gazzani

Abstract:

Nitrogen is an important agricultural input that is critical for production. However, the introduction of large amounts of nitrogen into the environment has a number of undesirable impacts: loss of biodiversity, eutrophication of waters and soils, drinking water pollution, acidification, greenhouse gas emissions, and human health risks. It is a challenge to sustain or increase food production while reducing losses of reactive nitrogen to the environment, but there are many potential benefits associated with improving nitrogen use efficiency. Reducing nutrient losses from agriculture is crucial to the successful implementation of agricultural policy. Traditional regulatory instruments applied to reduce the environmental impacts of nitrogen fertilizers, despite some successes, have failed to address many environmental challenges and have imposed high costs on society to achieve environmental quality objectives. As a result, economic instruments have come to be recognized for their flexibility and cost-effectiveness. The objective of this research project is to analyze the potential for increased use of market-based instruments in nitrogen control policy. The report reviews existing knowledge, bringing different studies together to assess the global nitrogen situation and the most relevant environmental management policies that aim to reduce pollution sustainably without negatively affecting agricultural production or food prices. The analysis provides guidance on how different market-based instruments might be orchestrated within an overall policy framework for the development and assessment of sustainable nitrogen management from the economic, environmental, and food security points of view.

Keywords: nitrogen emissions, chemical fertilizers, eutrophication, non-point of source pollution, dairy farm

Procedia PDF Downloads 328
1718 Finding Optimal Operation Conditions in a Biological Nutrient Removal Process by Balancing Effluent Quality, Economic Cost and GHG Emissions

Authors: Seungchul Lee, Minjeong Kim, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo

Abstract:

It is hard to maintain the effluent quality of wastewater treatment plants (WWTPs) under fixed operational control because the influent flow rate and pollutant load change continuously. The aim of this study is the development of a multi-loop multi-objective control (ML-MOC) strategy at plant-wide scope, targeting four objectives: 1) maximization of nutrient removal efficiency; 2) minimization of operational cost; 3) maximization of CH₄ production in anaerobic digestion (AD), for reuse of the CH₄ as a heat and energy source; and 4) minimization of N₂O emission, to mitigate global warming. First, the benchmark simulation model is modified to describe N₂O dynamics in the biological process, yielding the benchmark simulation model for greenhouse gases (BSM2G). Then, three single-loop proportional-integral (PI) controllers are implemented: a DO controller, an NO₃ controller, and a CH₄ controller. Their optimal set-points are found using a multi-objective genetic algorithm (MOGA). Finally, the ML-MOC is implemented and evaluated in BSM2G. Compared with the reference case, the ML-MOC with the optimal set-points showed the best control performance, with improvements of 34% in effluent quality, 5% in CH₄ productivity, and 79% in N₂O emission, together with a 65% decrease in operational cost.
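
Each single loop in such a strategy is an ordinary discrete PI controller driving a process variable toward the set-point chosen by the optimizer. The sketch below closes a PI loop around a toy first-order process; the gains and plant parameters are illustrative assumptions, not the BSM2G tuning:

```python
def simulate_pi(setpoint, kp, ki, steps=500, dt=0.1):
    """Discrete PI control of a first-order plant dy/dt = -a*y + u (a = 0.5).
    Returns the process output after `steps` Euler steps."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = max(0.0, kp * error + ki * integral)   # actuation kept non-negative
        y += dt * (-0.5 * y + u)                   # plant update (Euler)
    return y

# e.g. a dissolved-oxygen loop regulating toward a set-point of 2.0 (mg/L):
do_level = simulate_pi(setpoint=2.0, kp=2.0, ki=1.0)  # settles near 2.0
```

In the ML-MOC scheme, the MOGA searches over the set-points handed to loops like this one, trading effluent quality against cost, CH₄ recovery, and N₂O emission.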

Keywords: Benchmark simulation model for greenhouse gas, multi-loop multi-objective controller, multi-objective genetic algorithm, wastewater treatment plant

Procedia PDF Downloads 502
1717 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications

Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero

Abstract:

Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches with respect to their concentration in aqueous solutions, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrate non-linear fluorescence quenching as the concentration increases for both NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. This behaviour is explained theoretically as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the separation distance between fluorophores: a small distance between nanodiamonds represents a highly concentrated system, whereas a large distance represents a low-concentration one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is the 0.5 mg/mL concentration, for which our studies demonstrate optimal human cell viability (99%). This concentration has the feature of being as biocompatible as water, making it possible to internalize the nanodiamonds in cells without harming the living medium. Consequently, not only can we track nanodiamonds on the surface of or inside the cell with excellent precision, due to their fluorescence intensity, but we can also perform thermometry tests, transforming a fluorescence contrast image into a temperature contrast image.
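
The distance dependence underlying the FRET analysis is the standard efficiency law E = 1 / (1 + (r/R₀)⁶), where r is the donor-acceptor separation and R₀ the Förster radius. A minimal sketch (the radii used are in arbitrary units, for illustration only):

```python
def fret_efficiency(r, r0):
    """Forster transfer efficiency: E = 1 / (1 + (r / r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# Small separation (high-concentration analogue) -> near-complete transfer;
# at r = r0 the efficiency is exactly 50% by definition of the Forster radius.
e_close = fret_efficiency(r=0.5, r0=1.0)
e_at_r0 = fret_efficiency(r=1.0, r0=1.0)
```

Because of the sixth-power fall-off, efficiency drops from ~98% at r = 0.5 R₀ to ~1.5% at r = 2 R₀, which is what lets inter-particle distance stand in for concentration in the simulation.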

Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry

1716 Expansion of Cord Blood Cells Using a Mix of Neurotrophic Factors

Authors: Francisco Dos Santos, Diogo Fonseca-Pereira, Sílvia Arroz-Madeira, Henrique Veiga-Fernandes

Abstract:

Haematopoiesis is a developmental process that generates all blood cell lineages in health and disease. It relies on quiescent haematopoietic stem cells (HSCs) that are able to differentiate, self-renew and expand upon physiological demand. HSCs are of great interest in regenerative medicine, with applications including haematological malignancies, immunodeficiencies and metabolic disorders. However, the limited yield from existing HSC sources drives the global need for reliable techniques to expand harvested HSCs at high quality and in sufficient quantities. With the extensive use of cord blood progenitors for clinical applications, there is a demand for a safe and efficient expansion protocol able to overcome the limitations of cord blood as a source of HSCs. StemCell2MAX™ developed a technology that enhances the survival, proliferation and transplantation efficiency of HSCs, leading the way to a more widespread use of HSCs for research and clinical purposes. StemCell2MAX™ MIX is a solution that improves HSC expansion up to 20-fold relative to the state of the art while preserving stemness. In a recent study by a leading cord blood bank, StemCell2MAX MIX was shown to support a selective 100-fold expansion of CD34+ haematopoietic stem and progenitor cells (compared to a 10-fold expansion of total nucleated cells) while maintaining their multipotent differentiation potential, as assessed by CFU assays. The technology developed by StemCell2MAX™ opens new horizons for the use of expanded haematopoietic progenitors for both research purposes (including quality and functional assays in cord blood banks) and clinical applications.

Keywords: cord blood, expansion, hematopoietic stem cell, transplantation
