Search results for: quantum optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3794

44 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems

Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair

Abstract:

Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing the radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700°C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, to utilize this promising new reactor design, effective corrosion-resistant coatings on RAFM steels are a pressing need. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods to fabricate thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can be used to produce ultra-fine molten splats and to allow fine adjustment of the coating chemistry. Thin, dense ceramic coatings with a chemistry chosen for superior chemical stability in molten Pb-Li, low activation properties, and good radiation tolerance are ideal for corrosion protection of RAFM steels. A key challenge is to accommodate the coating's CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist air oxidation and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples, together with comparisons against bare RAFM reference samples, will be presented, along with conclusions assessing the viability of the new ceramic coatings as corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors.

Keywords: breeding blanket, corrosion protection, coating, plasma spray

Procedia PDF Downloads 307
43 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
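
As a rough, hedged illustration of the model-based inverse method described above (not the authors' three-control-volume formulation), the sketch below calibrates the heat-transfer closure coefficients of a toy two-node ullage/liquid balance against synthetic "measured" data with a nonlinear least-squares step; every parameter name and value here is an assumption introduced only for the example.

```python
# Minimal sketch: lumped tank model with two closure coefficients (h_ul, h_wall),
# identified by fitting the simulated ullage temperature to reference data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def tank_rhs(t, y, h_ul, h_wall):
    """y = [T_ullage, T_liquid] in K; h_* are heat-transfer closures in W/K (assumed)."""
    T_u, T_l = y
    C_u, C_l = 2.0e3, 5.0e4           # lumped heat capacities of each node, J/K (assumed)
    T_wall = 295.0                    # warm insulating-wall temperature, K (assumed)
    q_wall = h_wall * (T_wall - T_u)  # heat leak from the wall into the ullage
    q_int = h_ul * (T_u - T_l)        # heat exchanged across the gas-liquid interface
    return [(q_wall - q_int) / C_u, q_int / C_l]

def simulate(params, t_eval):
    sol = solve_ivp(tank_rhs, (t_eval[0], t_eval[-1]), [25.0, 20.5],
                    t_eval=t_eval, args=tuple(params), rtol=1e-8)
    return sol.y[0]                   # ullage temperature (the driver of tank pressure)

# Reference data: synthetic here, generated with "true" closures plus noise.
t = np.linspace(0.0, 3600.0, 200)
data = simulate((1.5, 0.8), t) + np.random.default_rng(1).normal(0.0, 0.05, t.size)

# Inverse step: recover the closure coefficients that reproduce the data.
fit = least_squares(lambda p: simulate(p, t) - data, x0=[0.5, 0.5],
                    bounds=([0.0, 0.0], [10.0, 10.0]))
print("identified closures (h_ul, h_wall):", fit.x.round(3))
```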

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 92
42 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage

Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti

Abstract:

Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy request and supply. The most widely used materials for TES are the organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range and have a tunable working temperature. However, they suffer from a low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the building industry, solar thermal energy collection and the thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the diffusion of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin in a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45°C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. They were added in various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique, by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally and mechanically. Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize also in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the through-thickness thermal conductivity increased with increasing PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates with a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more pronounced decrease of the stress and strain at break and of the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that these reductions could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrated the feasibility of multifunctional structural TES composites and highlighted that the PCM size and distribution affect the mechanical properties. In this perspective, this group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and the current activity focuses on the optimization of the synthesis parameters to increase the emulsion efficiency.

Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage

Procedia PDF Downloads 160
41 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of implementations based on non-invasive neuroimaging methods, radical improvement of the achievable channel bandwidth and, thus, of the decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and corresponding electrical brain activity, a series of experiments was carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This process was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows a physiologically plausible picture of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides the means for controlling devices with a large number of degrees of freedom as well as for exploratory studies of the complex neural processes underlying movement execution.
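
For concreteness, a minimal sketch of the evaluation metric quoted above (the Pearson correlation r between decoder output and accelerometer trace) is given below; the decoder itself is only stubbed with synthetic signals, and none of the paper's actual architecture or data is reproduced.

```python
# Sketch: score a decoded movement trace against the measured accelerometer signal.
import numpy as np

def pearson_r(predicted, measured):
    """Correlation coefficient used in the abstract to quantify decoding accuracy."""
    p = predicted - predicted.mean()
    m = measured - measured.mean()
    return float((p @ m) / (np.linalg.norm(p) * np.linalg.norm(m) + 1e-12))

# Toy example: a noisy reconstruction of a finger-movement envelope.
t = np.linspace(0.0, 10.0, 1000)
measured = np.abs(np.sin(2 * np.pi * 0.5 * t))            # stand-in accelerometer magnitude
predicted = measured + np.random.default_rng(0).normal(0.0, 0.2, t.size)  # stand-in decoder output
print(f"r = {pearson_r(predicted, measured):.2f}")
```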

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 177
40 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the CCDC Statistical Tools and Hansen Solubility Parameters

Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini

Abstract:

The solubility of active pharmaceutical ingredients (APIs) is challenging for the pharmaceutical industry. New multicomponent crystalline forms, such as cocrystals and solvates, present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule with different coformers/solvents. However, it is necessary to develop methods to obtain multicomponent forms in an efficient way and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) are considered a tool to obtain theoretical knowledge of the solubility of the target compound in the chosen solvent. H-Bond Propensity (HBP), Molecular Complementarity (MC) and Coordination Values (CV) are tools for the statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Centre (CCDC). The HSPs and the CCDC tools are based on inter- and intra-molecular interactions. Curcumin (Cur), the target molecule, is commonly used as an anti-inflammatory. Demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Biscur) are natural analogues of Cur from turmeric. These target molecules differ in their solubilities. Accordingly, this work aimed to analyze and compare different tools for the prediction of multicomponent forms (solvates) of Cur, Demcur and Biscur. The HSP values were calculated for Cur, Demcur and Biscur using chemical group contribution methods and statistical optimization from experimental data, with the HSPmol software. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability that the target molecules would interact with the solvent molecules was determined using the CCDC tools. A dataset of fifty different organic solvents was ranked for each prediction method and by a consensus ranking of different combinations of HSP, CV, HBP and MC values. Based on the prediction, 15 solvents were selected, including Dimethyl Sulfoxide (DMSO), Tetrahydrofuran (THF), Acetonitrile (ACN) and 1,4-Dioxane (DOX), among others. In an initial analysis, the slow evaporation technique from 50°C, at room temperature and at 4°C, was used to obtain solvates. The single-crystal data were collected using a Bruker D8 Venture diffractometer with a Photon100 detector. Data processing and crystal structure determination were performed using the APEX3 and Olex2-1.5 software. According to the results, the HSPs (theoretical and optimized) and the Hansen solubility spheres for Cur, Demcur and Biscur were obtained. With respect to the prediction analyses, the prediction methods were evaluated through the ranking and consensus-ranking positions of solvates already reported in the literature. It was observed that the HSP-CV combination obtained the best results when compared to the other methods. Furthermore, from the selected solvents, six new solvates (Cur-DOX, Cur-DMSO, Biscur-DOX, Biscur-THF, Demcur-DOX and Demcur-ACN) and a new Biscur hydrate were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX and Biscur-water. Moreover, unit-cell parameter information for Cur-DMSO, Biscur-THF and Demcur-ACN was obtained. These preliminary results show that the prediction approach is a promising strategy to evaluate the possibility of forming multicomponent crystal forms; work is currently ongoing to obtain the corresponding multicomponent single crystals.
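
The RED screening step mentioned above can be illustrated with the standard Hansen-space distance; in the sketch below, the solvent parameters are tabulated handbook-style values, while the curcumin parameters and the interaction radius R0 are placeholder assumptions rather than the study's fitted results.

```python
# Sketch of the RED screening: Hansen distance Ra between a target molecule and a
# solvent, normalized by the interaction radius R0 (RED < 1 suggests good affinity).
import math

def hansen_red(target, solvent, r0):
    """RED = Ra / R0 with Ra^2 = 4*(dD)^2 + (dP)^2 + (dH)^2 (standard HSP metric)."""
    dD = target["dD"] - solvent["dD"]
    dP = target["dP"] - solvent["dP"]
    dH = target["dH"] - solvent["dH"]
    ra = math.sqrt(4.0 * dD ** 2 + dP ** 2 + dH ** 2)
    return ra / r0

curcumin = {"dD": 19.2, "dP": 9.3, "dH": 12.4}          # assumed HSPs, MPa^0.5
solvents = {"DMSO": {"dD": 18.4, "dP": 16.4, "dH": 10.2},
            "THF":  {"dD": 16.8, "dP": 5.7,  "dH": 8.0}}
for name, hsp in solvents.items():
    print(name, round(hansen_red(curcumin, hsp, r0=9.0), 2))  # r0 assumed
```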

Keywords: curcumin, HSPs, prediction, solvates, solubility

Procedia PDF Downloads 63
39 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective

Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh

Abstract:

The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, where there exists competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward, the farmers set their production quantity decisions. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions regarding their health and environmental benefits. Then, profit functions are modeled with consideration of some characteristics of organic farming, including the crop yield gap and the organic cost factor. Our research also considers both economies and diseconomies of scale in farming production, as well as the effects of the organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled as Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure. In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, the organic cost factor, the organic subsidy, and the percent price premium of the organic product, on the farmers’ and retailer’s optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model increases the production of organic produce while decreasing its price, which in turn motivates the consumption of organic products in the market. The proposed coordination model also helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains. Our paper shows that although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the method of allocating the subsidy between the organic farmer and the retailer is of lesser importance.
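
As a heavily simplified, hedged illustration of why centralization can outperform decentralized decision-making in such a chain (the retailer's Stackelberg stage, the yield gap and the subsidy are omitted), the sketch below compares the joint profit of two quantity-setting farmers under simultaneous competition with the centralized optimum, using an assumed linear demand system with a brand-equity premium for the organic product; all symbols and values are assumptions, not the paper's model.

```python
# Stylized two-product sketch (assumed linear demands; not the paper's full model).
import sympy as sp

qc, qo = sp.symbols("q_c q_o", positive=True)
a, theta, b, g = 10, 2, 1, sp.Rational(1, 2)   # base demand, organic premium, own/cross slopes
cc, co = 2, 3                                   # unit costs; the organic cost factor is higher
pc = a - b * qc - g * qo                        # inverse demand, conventional product
po = a + theta - b * qo - g * qc                # inverse demand, organic product (brand premium)
profit_c = (pc - cc) * qc
profit_o = (po - co) * qo

# Decentralized: each farmer best-responds to the other (simultaneous equilibrium).
nash = sp.solve([sp.diff(profit_c, qc), sp.diff(profit_o, qo)], [qc, qo], dict=True)[0]
# Centralized: a single planner maximizes the joint profit of the chain.
central = sp.solve([sp.diff(profit_c + profit_o, qc),
                    sp.diff(profit_c + profit_o, qo)], [qc, qo], dict=True)[0]

joint = profit_c + profit_o
print(f"decentralized joint profit: {float(joint.subs(nash)):.2f}")     # ~23.3
print(f"centralized joint profit:   {float(joint.subs(central)):.2f}")  # ~24.3 (higher)
```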

Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain

Procedia PDF Downloads 111
38 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to recognize the possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work. These patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry consisting of cortical and cancellous bone was created from the acquired cloud of points. The temporomandibular joint and muscle system were simulated to imitate the real behavior of the masticatory system. Finite element meshing and analysis were performed with the ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of the initial compression of the connected bone parts or of the gap between them was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases, assuming a force of magnitude 100 N acting on the left molars, the right molars and the incisors, were investigated. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of the stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of the mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for a better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.
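
As a side note for readers unfamiliar with the failure criterion used above, the short sketch below evaluates the von Mises equivalent stress from a generic 3-D stress state and compares it with a typical titanium-alloy yield strength; the numbers are arbitrary illustrations, not results from the presented model.

```python
# Von Mises equivalent stress from the six independent stress-tensor components.
import numpy as np

def von_mises(s):
    """s = [s_xx, s_yy, s_zz, t_xy, t_yz, t_zx] in MPa."""
    sxx, syy, szz, txy, tyz, tzx = s
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Example stress state in MPa; well below the ~880 MPa yield strength typical of Ti-6Al-4V.
print(f"{von_mises([320.0, 110.0, 40.0, 55.0, 10.0, 25.0]):.1f} MPa")
```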

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 246
37 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves

Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare

Abstract:

The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and the sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists first of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by the classical analytical models. In contrast, the simulated flow force was found to be quite different from the value calculated analytically. This led the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled-orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. This also emphasized the influence of the choices made in the implementation of the CFD calculation and in the analysis of its results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
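
For reference, the classical analytical estimates that such CFD results are typically benchmarked against (orifice flow rate and steady axial flow force on the spool) can be sketched as follows; the geometry, pressure drop and coefficients are assumed values chosen only to make the example run, not data from the study.

```python
# Textbook spool-valve estimates: orifice equation and classical steady flow force.
import math

rho = 850.0                   # hydraulic oil density, kg/m^3 (assumed)
dp = 7.0e6                    # pressure drop across the metering orifice, Pa (assumed)
Cd, Cv = 0.70, 0.98           # discharge and velocity coefficients (typical high-Re values)
theta = math.radians(69.0)    # classical jet angle for a sharp-edged annular orifice
d, x = 6.0e-3, 0.3e-3         # spool diameter and axial opening, m (assumed)

A = math.pi * d * x                                 # metering area of the annular orifice
Q = Cd * A * math.sqrt(2.0 * dp / rho)              # orifice equation: volumetric flow, m^3/s
F_flow = 2.0 * Cd * Cv * A * dp * math.cos(theta)   # classical steady (axial) flow force, N

print(f"Q = {Q * 6e4:.1f} L/min, steady flow force = {F_flow:.1f} N")
```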

Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve

Procedia PDF Downloads 43
36 Big Data Applications for the Transport Sector

Authors: Antonella Falanga, Armando Cartenì

Abstract:

Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises/events, and also mobility for urban planning and for the design/management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span planning, designing, and managing systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include a reduction in travel times, road accidents and pollutant emissions. Within this context, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand. Achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips (about 2.6 trips per capita) and a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, cloud computing, decision-making, mobility demand, transportation

Procedia PDF Downloads 62
35 Regenerating Habitats. A Housing Based on Modular Wooden Systems

Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez

Abstract:

Despite the ambitions to achieve climate neutrality by 2050, to fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the alternatives that is most frequently used nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon filter, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, in the last few decades, technological advancements have made it possible to innovate systems centered around the use of wood. However, there are still some questions that require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, prices, and waste from raw materials. In addition, this approach will make it possible to develop new architectural solutions to address the rigidity and irreversibility of buildings, two of the most important issues facing housing today. Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the conventional family's traditional model and are founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern items. Migrations on a global scale, different kinds of co-housing, or even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to address the formalization of a reversible proposition that adjusts to the scale of time and its multiple reformulations, many of which are unpredictable. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and account for demographic shifts in society to stabilize communities is the foundation of truly innovative sustainability.

Keywords: modular, timber, flexibility, housing

Procedia PDF Downloads 78
34 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “a needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of the input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars derived from bio-algebra-inspired mathematics. It iteratively translates input data into bio-semantic units with the help of contextual information, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Expressed as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
33 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal

Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi

Abstract:

The aim of achieving a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's predominant building stock comprises ancient buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving the Zero Emission Building (ZEB) objective. The building has a mixed function, with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. Energy simulation and auditing are the first steps in the analysis, assessing the energy performance of the building in its current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving mechanical systems, automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards the energy-efficient refurbishment of historical buildings that may have limited potential for renovation in their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.

Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation

Procedia PDF Downloads 67
32 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 60
31 Early Predictive Signs for Kasai Procedure Success

Authors: Medan Isaeva, Anna Degtyareva

Abstract:

Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who received the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure. This enables clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. Being single centered, the results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, other potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored, which could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
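
As a hedged illustration of the multifactorial logistic-regression modeling referred to above, the sketch below fits such a model on synthetic data built around the three reported predictors (gallbladder visibility, post-operative cholesterol and post-operative direct bilirubin); the units, effect sizes and resulting coefficients are assumptions for the example, not the study's results.

```python
# Illustrative sketch only: logistic regression of Kasai-procedure success on three predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 166                                   # cohort size quoted in the abstract
gallbladder = rng.integers(0, 2, n)       # 1 = gallbladder detectable on ultrasound
cholesterol = rng.normal(5.0, 1.2, n)     # mmol/L (assumed scale)
direct_bili = rng.normal(60.0, 25.0, n)   # µmol/L (assumed scale)

# Synthetic outcome consistent with the reported directions of effect.
logit = 0.9 * gallbladder - 0.4 * (cholesterol - 5.0) - 0.03 * (direct_bili - 60.0)
success = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([gallbladder, cholesterol, direct_bili])
model = LogisticRegression().fit(X, success)
print("coefficients (gallbladder, cholesterol, direct bilirubin):", model.coef_.round(2))
```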

Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival

Procedia PDF Downloads 54
30 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade

Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma

Abstract:

Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with, as main precipitates, M23C6 carbides, with M = Fe, Cr and carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, determined by the spatial and size distribution of precipitates and the martensite crystals, plays a key role in the high temperature properties of the steel. Unfortunately, the high temperature response of EUROFER97 is currently insufficient for long term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates at prolonged high temperature exposure. The objective of this study is to induce grain refinement by appropriate modifications of the processing route in order to increase the high temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with smallest precipitates obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.

Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments

Procedia PDF Downloads 299
29 Economic Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah

Abstract:

Climate change represents one of the most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen by almost 25% since 1958, Arctic sea ice has shrunk by 40% since 1959, and global sea levels have risen by more than 5.5 cm since 1990. Power plants are the major culprits of GHG emissions to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less-researched advanced zero-emission power plant. Advanced zero-emission power plants make use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walter Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP has drawn a lot of attention because of its ability to capture ~100% of CO2, its cost reduction of about 30-50% compared to other carbon abatement technologies, an efficiency penalty that is not as large as that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. Advanced zero-emission power plants differ from a conventional gas turbine in that the combustor is substituted with the mixed conductive membrane reactor (MCM reactor). The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of about 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modeled in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four possible layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout – AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine – AZEP 85% (85% CO2 capture). This paper discusses the Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
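
As a hedged, self-contained illustration of the Monte Carlo risk-analysis step mentioned in the closing sentence (not the authors' Fortran/Excel/MATLAB model), the sketch below propagates uncertain fuel price, electricity price, carbon credit and capacity factor into a net-present-value distribution; every input value is an assumption chosen only for the example.

```python
# Monte Carlo economic risk sketch for a capture-equipped power plant (all inputs assumed).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs (illustrative distributions)
fuel_price = rng.normal(30.0, 6.0, n)                 # EUR per MWh of fuel
elec_price = rng.normal(80.0, 10.0, n)                # EUR per MWh of electricity
carbon_credit = rng.triangular(20.0, 40.0, 80.0, n)   # EUR per tonne of CO2 avoided
capacity_factor = rng.uniform(0.70, 0.90, n)

# Fixed plant assumptions
P_net, eta, capex = 400.0, 0.48, 0.9e9                # MW, net efficiency, EUR
co2_avoided = 0.35                                    # t CO2 per MWh vs. an unabated plant
years, discount = 25, 0.08
annuity = (1.0 - (1.0 + discount) ** -years) / discount

output = P_net * 8760.0 * capacity_factor             # MWh per year
cash_flow = output * (elec_price + co2_avoided * carbon_credit - fuel_price / eta)
npv = -capex + annuity * cash_flow                    # EUR

print(f"mean NPV = {npv.mean() / 1e6:.0f} MEUR, P(NPV < 0) = {(npv < 0).mean():.1%}")
```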

Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants

Procedia PDF Downloads 397
28 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the actual ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm x 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500 nm-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in less than one second, a more time-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed to the total volume of the micro-channel, avoiding the important waste generation of other flow-based titration techniques. The associated analytical method is automated and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
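
A minimal sketch (not the authors' code) of the two quantities at the heart of the method above: the free acidity recovered from the alkalimetric equivalence point after oxalate complexation, and the Taylor-Aris effective dispersion coefficient that governs how quickly the gradient forms in the capillary. All numerical values are hypothetical.

```python
import math

def free_acidity(v_eq_uL, c_naoh_M, v_sample_uL):
    """Free nitric acid concentration (M) from an alkalimetric titration:
    moles of NaOH at the neutralization boundary divided by the sample volume.
    Assumes the oxalate buffer has already masked the hydrolysable U(VI)."""
    return v_eq_uL * c_naoh_M / v_sample_uL

def taylor_aris_dispersion(D_m2s, radius_m, velocity_ms):
    """Effective axial dispersion coefficient for laminar flow in a circular
    capillary (Taylor-Aris): D_eff = D + r^2 * u^2 / (48 * D)."""
    return D_m2s + (radius_m ** 2 * velocity_ms ** 2) / (48.0 * D_m2s)

# Hypothetical numbers, for illustration only (not the paper's data):
print(f"free acidity ~ {free_acidity(35.0, 0.1, 1.2):.2f} M")
# 100 um radius channel (200 um diameter), 5 mm/s mean velocity, D = 2e-9 m^2/s
print(f"D_eff ~ {taylor_aris_dispersion(2e-9, 100e-6, 5e-3):.2e} m^2/s")
```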

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 387
27 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly popular in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs on the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure since linuxkit installs the minimum set of dependencies required to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
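
A minimal sketch of the transformation idea described above: a declarative microservice definition is turned into a deployment template whose load-balancer endpoint can be injected into other services. The input format, resource types and names below are hypothetical illustrations, not the real i2kit format, its source code, or actual CloudFormation resource definitions.

```python
# Minimal sketch (hypothetical, not the real i2kit format or source): turn a
# declarative microservice definition into a simplified deployment template.
service_def = {                       # hypothetical i2kit-style input
    "name": "orders",
    "replicas": 2,
    "containers": [{"image": "registry.example.com/orders:1.4", "port": 8080}],
}

def to_template(svc):
    """Toy template with placeholder resource types (not real CloudFormation
    types): one VM group per service plus a load balancer whose endpoint other
    services would receive through an environment variable."""
    ami_id = f"ami-{svc['name']}-linuxkit"    # stand-in for the linuxkit-built image
    return {
        "Resources": {
            f"{svc['name']}VmGroup": {"Type": "VirtualMachineGroup",
                                      "Count": svc["replicas"], "ImageId": ami_id},
            f"{svc['name']}Lb": {"Type": "LoadBalancer",
                                 "Port": svc["containers"][0]["port"]},
        },
        "Outputs": {f"{svc['name'].upper()}_ENDPOINT": f"{svc['name']}Lb.DNSName"},
    }

print(to_template(service_def)["Outputs"])   # {'ORDERS_ENDPOINT': 'ordersLb.DNSName'}
```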

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 179
26 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis

Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma

Abstract:

According to the UNAIDS 2013 estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15-44 years). Seventy-five percent of HIV acquisitions are through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was thought worthwhile to integrate HIV prevention and reproductive health services, including protection against unintended pregnancy, for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but on the contrary it increased HIV susceptibility due to its surfactant action. Thus, to meet the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities, along with safety profiling against cervicovaginal cells. In order to further enhance the scope of the above study, benzenepropanamine was hybridized with thiourea so as to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities, as well as their safety against vaginal flora and cervical cells. Simulated vaginal fluid (SVF) stability and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into systemic circulation and subsequent exposure in blood plasma through the vaginal wall. The study resulted in the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), exhibiting a better activity profile than N-9, as it showed RT inhibition (72.30%), anti-Trichomonas activity (MIC, 46.72 µM against the MTZ-susceptible strain and MIC, 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%) and antifungal activity (MIC, 3.12-50 µg/mL) against four fungal strains. The high safety against vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability and minimal vaginal absorption supported its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), with HIV-1 reverse transcriptase. The docking study revealed that compound (29) interacted with HIV-1 RT similarly to the standard drug Nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may result in effective vaginal microbicides having spermicidal, anti-Trichomonas, antifungal and anti-HIV potential altogether, with enhanced safety to cervico-vaginal cells in comparison to nonoxynol-9.

Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide

Procedia PDF Downloads 344
25 Monte Carlo Risk Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. The advanced zero-emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% CO2; it also boasts about 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it achieves almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero-emission power plant differs from a conventional gas turbine in the sense that its combustor is substituted with the mixed conductive membrane reactor (MCM-reactor). The MCM-reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 Kelvin and a pressure of 2 Mega-Pascals. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was developed using Fortran, and economic analysis was conducted using Excel and Matlab, followed by an optimization case study. This paper discusses the techno-economic and Monte Carlo risk analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating layout (sequential burning layout) – AZEP 85% (85% CO2 capture), and the pre-expansion reheating layout (sequential burning layout) with flue gas turbine – AZEP 85% (85% CO2 capture).
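
A minimal sketch of the LTHX energy balance implied by the cycle description above: 90% of the compressed air is heated from the roughly 723 K compressor exit state to about 1150 K before the membrane. The state-point temperatures come from the abstract; the mass flow and constant specific heat are illustrative assumptions.

```python
# Minimal sketch of the LTHX energy balance described in the abstract.
# Temperatures (723 K compressor exit, ~1150 K membrane inlet) are from the
# abstract; the mass flow and constant cp are illustrative assumptions.
cp_air = 1.05e3        # J/(kg*K), rough mean cp of air over this temperature range
m_dot_air = 100.0      # kg/s, hypothetical total compressor mass flow
t_in, t_out = 723.0, 1150.0

m_dot_lthx = 0.9 * m_dot_air                     # 90% of the air is preheated in the LTHX
q_lthx = m_dot_lthx * cp_air * (t_out - t_in)    # heat duty in W

print(f"LTHX duty ~ {q_lthx / 1e6:.1f} MW for {m_dot_lthx:.0f} kg/s of air "
      f"heated from {t_in:.0f} K to {t_out:.0f} K")
```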

Keywords: gas turbine, global warming, greenhouse gases, power plants

Procedia PDF Downloads 471
24 Gene Cloning and Expression of Azoreductases from Azo-Degraders Lysinibacillus macrolides and Bacillus coagulans Isolated from Egyptian Industrial Wastewater

Authors: Omaima A. Sharaf, Wafaa M. Abd El-Rahim, Hassan Moawad, Michael J. Sadowsky

Abstract:

The textile industry is one of the most important industries worldwide. It is known that eco-friendly industrial and agricultural activities are significant for the socio-economic stability of all countries. The absence of appropriate industrial wastewater treatment is an essential barrier to sustainable development in the food and agricultural sectors, especially in a developing country like Egypt. Thus, the development of enzymatic bioremediation technology for textile dye removal will enhance the collaboration between the scientists who develop the technology and the industry where this technology will be implemented towards the safe disposal of textile dye wastes. Highly efficient microorganisms are of utmost importance in developing and using highly effective biological treatment processes. Bacterial degradation of azo dyes is generally initiated by an enzymatic step that involves cleavage of azo linkages, usually with the aid of an azoreductase as electron donor. Thus, expanding the spectrum of microorganisms with high enzymatic activities such as azoreductases and discovering novel azo-dye degrading enzymes, with enhanced stability and superior catalytic properties, are necessary for many environmental and industrial applications. Consequently, the use of molecular tools has become increasingly integrated into the understanding of enzyme properties and characterization. Researchers have utilized gene cloning and expression methods as a tool to produce recombinant proteins for decolorizing dyes more efficiently. Thus, presumptive evidence for the presence of genes encoding azoreductases in the genomes of selected local, and most potent, azo-degrading strains was obtained by using specific oligonucleotide primers. These potent strains had been isolated from textile industrial wastewater in Egypt and identified using 16S rRNA sequence analysis as 'Lysinibacillus macrolides B8, Brevibacillus parabrevis B11, Bacillus coagulans B7, and B. cereus B5'. PCR products of two full-length genes designated as AZO1 (621 bp) and AZO2 (534 bp) were detected. BLASTx results indicated that the AZO1 gene corresponds to a predicted azoreductase from Bacillus sp. ABP14, complete genome, multispecies azoreductase [Bacillus]. It was submitted to GenBank under accession no. BankIt2085371 AZO1 MG923210 (621 bp; 207 amino acids). AZO1 was generated from the DNA of our identified strain Lysinibacillus macrolides B8. On the other hand, the AZO2 gene corresponds to a predicted azoreductase from Bacillus cereus strain S2-8. Its GenBank accession no. is BankIt2085839 AZO2 MG932081 (534 bp; 178 amino acids), and it was amplified from our Bacillus coagulans B7. Both genes were successfully cloned into pCR2.1TOPO (Invitrogen) and into pET28b+ vectors, then transformed into E. coli DH5α and BL21(DE3) cells for heterologous expression studies. Our recombinant azoreductases (AZO1 and AZO2) exhibited potential enzyme activity and efficiently decolorized an azo dye (Direct Violet). They exhibited pH stability between 6 and 8, with optimum temperatures up to 60°C and 37°C after induction by 1 mM and 1.5 mM IPTG for AZO1 and AZO2, respectively. These results suggest that further optimization and purification of these recombinant proteins using different heterologous expression systems will give great potential for the sustainable utilization of these recombinant enzymes in several industrial applications, especially in wastewater treatment.
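
A minimal sketch (not the authors' protocol) of how dye decolorization efficiency is commonly quantified from absorbance readings at the dye's absorption maximum; the absorbance values below are hypothetical and the formula is the standard one used for azo dye removal studies.

```python
def decolorization_percent(abs_initial, abs_final):
    """Decolorization (%) = (A0 - At) / A0 * 100, the standard way to report
    azo dye removal from absorbance measured at the dye's absorption maximum."""
    return (abs_initial - abs_final) / abs_initial * 100.0

# Hypothetical absorbance readings for Direct Violet at its lambda_max:
a0, at = 1.25, 0.31
print(f"decolorization ~ {decolorization_percent(a0, at):.1f}%")  # ~75.2%
```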

Keywords: azoreductases, decolorization, enzyme activity, gene cloning and expression

Procedia PDF Downloads 129
23 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and the TSAA application is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situation awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Significant effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption reduction. The TSAA architecture is divided into two high-level systems: the 'Threats database' and the 'Conflict detector'. The first one receives the traffic data from the ADS-B device and stores the targets' data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This allows greater accuracy and avoids step-by-step propagation, which requires a larger computational load. Furthermore, the pre-check permits the exclusion of targets that are certainly not a threat, using an analytical and efficient geometrical approach, in order to decrease the computational load of the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application also on devices with multiple applications and/or low capacity in terms of available memory and computational capabilities.
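
A minimal sketch of the kind of analytical, geometric pre-check described above: a closest-point-of-approach test with straight-line propagation that discards targets which cannot enter a protection radius within a lookahead window. The thresholds, state format and 2-D simplification are illustrative assumptions, not the certified TSAA logic.

```python
import math

def cpa_precheck(own_pos, own_vel, tgt_pos, tgt_vel,
                 horiz_threshold_m=3704.0, lookahead_s=60.0):
    """Return False if the target can be discarded (its closest horizontal
    approach within the lookahead window stays outside the protection radius),
    True if it must be passed on to the full conflict detector. Straight-line
    propagation, horizontal plane only; thresholds are illustrative."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    t_cpa = 0.0 if v2 < 1e-9 else max(0.0, min(lookahead_s, -(rx * vx + ry * vy) / v2))
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa                   # separation at CPA
    return math.hypot(dx, dy) <= horiz_threshold_m

# Ownship heading east at 60 m/s; target 8 km north, heading south at 50 m/s:
print(cpa_precheck((0, 0), (60, 0), (0, 8000), (0, -50)))  # False: pre-check discards it
```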

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 285
22 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression

Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac

Abstract:

The rhizosphere soil refers to the dynamic environment of plant roots, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and are considered cardinal in alternative strategies for securing ecological plant disease management. The need to suppress fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite their incontestable potential, the full commercialization of microbial biopesticides is in a preliminary phase. Thus, there is a constant need for estimating the suitability of novel strains to be used as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producing strains were isolated from rhizosphere soil sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation process started by resuspending 1 g of soil sample in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹-10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains indicating membership of the Bacillus genus were obtained by the spread-plate technique. Cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h, at 28 °C and 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. Mycelial growth inhibition was estimated based on antimicrobial activity testing of the cultivation broth by the diffusion method. For Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding their effect on the antimicrobial activity of the tested cultivation broth against the tested pathogens. The study results point out the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and the mode of action of the selected strains as biocontrol agents. The following research should also involve bioprocess optimization steps to fully reach the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.
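
A minimal sketch (not the authors' analysis) of the statistical step described above: a one-way ANOVA on inhibition zone diameters across isolates, using hypothetical measurements and scipy. Duncan's multiple range test is not available in scipy, so Tukey's HSD is shown here as a comparable post-hoc grouping step under that substitution.

```python
# Hypothetical inhibition-zone data (mm); isolate names reused for illustration.
from scipy import stats

zones = {                      # three replicate inhibition zones per isolate
    "Par-1":  [22.0, 23.5, 21.8],
    "Kro-4a": [30.1, 29.4, 31.0],
    "Mah-1a": [28.7, 29.9, 28.2],
    "Kup-4":  [18.3, 17.9, 19.1],
}

f_stat, p_value = stats.f_oneway(*zones.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Pairwise post-hoc comparisons (Tukey HSD used in place of Duncan's test)
    print(stats.tukey_hsd(*zones.values()))
```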

Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi

Procedia PDF Downloads 196
21 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant

Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro

Abstract:

The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR) in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion system applications, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only pass through an extensive test campaign. Following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion, characterized by adequate sizes and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximate method based on experimental data by Baumann and Coney, included in “Noise and Vibration Control Engineering” edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustical medium, analyzes the jet noise in detail and assumes all sources act at the same time. It considers as the principal radiation source the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium. Empirical models allowing a direct calculation of the Sound Pressure Level are commonly used for rocket noise simulation. The model named after K. Eldred is probably one of the most exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustical impact of the HYPROB facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of source correlation effects and their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering, and a commercial code for industrial environmental noise impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used for formulating an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factors. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.
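
A minimal sketch (not the revised Eldred formulation itself) of the underlying idea described above: each equivalent noise source contributes a complex acoustic pressure (amplitude and propagation phase), contributions are summed coherently at a receiver, and the result is converted to a Sound Pressure Level. Source strengths, positions and the simple spherical-spreading model are illustrative assumptions.

```python
import numpy as np

P_REF = 20e-6          # Pa, reference pressure for SPL in air
C0 = 343.0             # m/s, speed of sound

def spl_at_receiver(sources, receiver, freq_hz):
    """Coherent sum of equivalent monopole-like sources: each source has an
    amplitude (Pa*m) and a phase set by its propagation delay. Spherical
    spreading 1/r, no atmospheric absorption; illustrative only."""
    k = 2.0 * np.pi * freq_hz / C0                     # wavenumber
    p_total = 0.0 + 0.0j
    for amp, pos in sources:
        r = np.linalg.norm(np.asarray(receiver) - np.asarray(pos))
        p_total += (amp / r) * np.exp(-1j * k * r)     # amplitude decay + phase
    return 20.0 * np.log10(np.abs(p_total) / P_REF)

# Hypothetical equivalent sources distributed along a jet axis (amplitude, position):
sources = [(50.0, (0.0, 0.0, z)) for z in np.linspace(0.5, 5.0, 10)]
print(f"SPL at 100 m ~ {spl_at_receiver(sources, (100.0, 0.0, 0.0), 200.0):.1f} dB")
```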

Keywords: acoustic impact, industrial noise, mitigation, rocket noise

Procedia PDF Downloads 146
20 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops

Authors: Simon Komesker, Achim Wagner, Martin Ruskowski

Abstract:

In times of volatile markets with fluctuating demand and uncertainty in global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes various influencing factors into account in order to operate towards the global optimum. To this end, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control, evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created rule-based, with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops of results from the process execution level (resources) and process-supporting (quality and logistics) systems, or for their reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks, following the two-container principle, also for different variants). The limited degrees of freedom of line production have produced the principle of progress-figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems, such as modular production according to known approaches with more degrees of freedom, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. An agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
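
A minimal sketch of the hybrid control-loop idea described above: a central planner issues an order sequence, while station agents feed back local key figures and bid for the next order. The agent names, key figures and bidding rule are hypothetical illustrations, not the authors' architecture.

```python
# Minimal sketch of a hybrid (central plan, decentral allocation) control loop.
from dataclasses import dataclass, field

@dataclass
class StationAgent:
    name: str
    queue: list = field(default_factory=list)   # orders waiting at the station
    quality_score: float = 1.0                  # 0..1, fed back from the quality system

    def bid(self, order):
        """Lower bid = better candidate: penalize long queues and poor quality."""
        return len(self.queue) + 2.0 * (1.0 - self.quality_score)

class CentralPlanner:
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, orders):
        """Centrally planned order sequence, decentrally allocated via agent bids."""
        for order in orders:
            bids = {a.name: a.bid(order) for a in self.agents}
            best = min(self.agents, key=lambda a: bids[a.name])
            best.queue.append(order)
            print(f"order {order} -> {best.name} (bids: {bids})")

agents = [StationAgent("station-A"), StationAgent("station-B", quality_score=0.8)]
CentralPlanner(agents).dispatch(["car-001", "car-002", "car-003"])
```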

Keywords: holonic manufacturing system, modular production system, planning, and control, system structure

Procedia PDF Downloads 169
19 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus Sp.

Authors: Lynette Lincoln, Sunil S. More

Abstract:

With the rising demand for sugar today, world sugar production is expected to escalate to 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely applied for fluid-filled centers in chocolates, the preparation of artificial honey, as a sweetener, and especially to ensure that foodstuffs remain fresh, moist and soft for longer spans. From an industrial perspective, properties such as increased solubility, osmotic pressure and the prevention of sugar crystallization in food products are highly desired. Screening for invertase does not involve a plate assay or qualitative test to determine enzyme production. In this study, we use a three-step screening strategy for the identification of a novel bacterial isolate from soil which is positive for invertase production. The primary step was serial dilution of soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India), grown on a Czapek-Dox medium (pH 5.0) containing sucrose as the sole C-source. Only colonies with the capability to utilize and break down sucrose exhibited growth. Bacterial isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Secondly, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Morphological observation of the most potent bacterium was carried out by several identification tests using Bergey's manual, which enabled us to identify the genus of the isolate as Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1500 bp was observed. The 16S rDNA sequence was used to carry out a BLAST alignment search against the NCBI GenBank database to obtain the sequence with the maximum identity score. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, indicating it to be the first novel strain for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is a dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out by a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v) and C-source (molasses, 4% v/v) were found to be optimal, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp., and the selection of the best factors that elevate enzyme activity, especially the utilization of molasses, which served as an ideal substrate and also as C-source, resulting in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. On-going studies involve purification of the extracellular invertase and determination of transfructosylating activity, as at high concentrations of sucrose, invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.
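
A minimal sketch (not the authors' protocol) of how invertase activity is commonly calculated from the glucose released, read at 540 nm against a glucose standard curve; the 540 nm reading suggests a reducing-sugar (DNS-type) assay, which is assumed here, and all numerical values are hypothetical.

```python
def invertase_activity_U_per_mL(abs_540, slope_abs_per_umol, enzyme_vol_mL,
                                time_min, dilution=1.0):
    """One unit (U) = 1 umol of reducing sugar released per minute under assay
    conditions. Converts absorbance at 540 nm to umol via a glucose standard
    curve (slope in absorbance per umol), then normalizes by time and volume."""
    umol_glucose = abs_540 / slope_abs_per_umol
    return umol_glucose * dilution / (time_min * enzyme_vol_mL)

# Hypothetical assay: A540 = 0.42, standard-curve slope 0.75 abs/umol,
# 0.1 mL enzyme, 15 min reaction, 10-fold diluted extract:
print(f"{invertase_activity_U_per_mL(0.42, 0.75, 0.1, 15, dilution=10):.2f} U/mL")
```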

Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation

Procedia PDF Downloads 231
18 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and are assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
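
A minimal sketch of a single denoising auto-encoder layer of the kind described above, with the corruption, reconstruction and squared-error objective made explicit. PyTorch is assumed as the framework (the abstract does not state one), and the layer sizes, noise level and training loop are illustrative, not the authors' configuration.

```python
# Minimal denoising auto-encoder sketch (PyTorch assumed; sizes and noise level
# are illustrative only, not the paper's configuration).
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, n_in=4096, n_hidden=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)   # reconstruction, same size as input

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, clean_batch, noise_std=0.1):
    """Corrupt the input, reconstruct, and minimize the squared error against
    the clean target (the usual denoising auto-encoder objective)."""
    noisy = clean_batch + noise_std * torch.randn_like(clean_batch)
    loss = nn.functional.mse_loss(model(noisy), clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = DenoisingAutoEncoder()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # classical SGD, as in the abstract
batch = torch.rand(8, 4096)                          # 8 flattened 64x64 patches
print(f"loss after one step: {train_step(model, opt, batch):.4f}")
```

Stacking follows by training the next auto-encoder on the hidden representation of the previous one, layer by layer.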

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 190
17 Simulation, Design, and 3D Print of Novel Highly Integrated TEG Device with Improved Thermal Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated with the Finite Element Method (FEM) in COMSOL Multiphysics software. Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing 3D selective laser melting (SLM) printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions using the 3D SLM printing technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. This integration results in a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching the COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
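
A minimal sketch (not the paper's model or measurements) of the basic thermoelectric output estimate behind a device like this: open-circuit voltage from the Seebeck effect across N junctions and the power delivered to a load. The Seebeck coefficient, resistances and temperature difference are illustrative values.

```python
def teg_output(n_junctions, seebeck_V_per_K, delta_T_K, r_internal_ohm, r_load_ohm):
    """Open-circuit voltage V = N * S * dT; delivered power P = V^2 * RL / (Ri + RL)^2.
    A lumped estimate, ignoring Thomson effects and contact resistances."""
    v_oc = n_junctions * seebeck_V_per_K * delta_T_K
    p_load = v_oc ** 2 * r_load_ohm / (r_internal_ohm + r_load_ohm) ** 2
    return v_oc, p_load

# Illustrative numbers for a Bi2Te3 array: 200 junctions, S ~ 200 uV/K per junction,
# 40 K across the device, 2 ohm internal resistance, matched 2 ohm load:
v, p = teg_output(200, 200e-6, 40.0, 2.0, 2.0)
print(f"V_oc ~ {v:.2f} V, P_load ~ {p * 1000:.0f} mW")
```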

Keywords: thermoelectric, finite element method, 3d print, energy conversion

Procedia PDF Downloads 62
16 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications

Authors: Annika J. Meyer, Tom Piechotta

Abstract:

Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensuring redundant communication systems when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology, a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of its low-power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg), specifically designed for search and rescue and area monitoring missions, is explored. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights, as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios: short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This use case also dramatically reduces one of the major drawbacks of LoRa communication systems, as a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm. Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. Such a staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, due to the close partnership with the Fire Department, the real-world applicability of the communication system is assured. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular- and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. The success of this paper could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
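
A minimal sketch of the standard LoRa time-on-air calculation that drives the spreading-factor trade-off mentioned above (higher SF extends range but lengthens airtime considerably). The formula follows the Semtech LoRa modem specification; the payload size and bandwidth are illustrative choices, not the paper's configuration.

```python
import math

def lora_time_on_air_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                        preamble_symbols=8, explicit_header=True, crc=True):
    """Time on air for one LoRa packet (Semtech modem-spec formula).
    cr=1..4 corresponds to coding rates 4/5..4/8; low data rate optimization
    is enabled for SF11/SF12 at 125 kHz, as commonly recommended."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                      # symbol duration, ms
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0        # low data rate optimize
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    payload_symbols = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_symbols + 4.25 + payload_symbols) * t_sym

# 20-byte telemetry frame: short-range SF7 vs long-range SF12 at 125 kHz
for sf in (7, 12):
    print(f"SF{sf}: {lora_time_on_air_ms(20, sf):.1f} ms on air")
```

The roughly twenty-fold airtime increase from SF7 to SF12 is why short-range UAV-to-UAV links and long-range UAV-to-ground links benefit from different spreading factors.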

Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations

Procedia PDF Downloads 43
15 Finite Element Method (FEM) Simulation, Design, and 3D Print of Novel Highly Integrated PV-TEG Device with Improved Solar Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated with the Finite Element Method (FEM) in COMSOL Multiphysics software. Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing 3D selective laser melting (SLM) printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions using the 3D SLM printing technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. This integration results in a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching the COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
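
Complementary to the output-power estimate given for the previous entry, a minimal sketch (not the paper's FEM model) of how the temperature difference available to the TEG can be estimated from the heat flux rejected by the solar cell and the stacked thermal resistances of the glue, junction layer, glass spacer and copper interconnects. All resistance and heat-flux values are illustrative assumptions.

```python
def delta_t_across_stack(q_heat_W, r_layers_K_per_W):
    """Series thermal-resistance estimate: dT_i = Q * R_i for each layer,
    returning the per-layer drops and the total temperature difference."""
    drops = [q_heat_W * r for r in r_layers_K_per_W]
    return drops, sum(drops)

# Illustrative stack behind a small PV cell rejecting 15 W of waste heat (K/W):
layers = {"thermal glue": 0.30, "TEG layer": 1.20, "glass spacer": 0.80, "copper": 0.05}
drops, total = delta_t_across_stack(15.0, list(layers.values()))
for name, dt in zip(layers, drops):
    print(f"{name:12s} dT ~ {dt:4.1f} K")
print(f"total dT across stack ~ {total:.1f} K")
```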

Keywords: thermoelectric, finite element method, 3d print, energy conversion

Procedia PDF Downloads 67