Search results for: hybrid spatial-temporal-spectral fusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2186

116 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover’s algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge-weight adder, node-index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
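
As a rough illustration of the "optimal number of times" the oracle is applied, here is a minimal NumPy sketch that simulates Grover amplitude amplification on a toy state vector and uses the standard iteration count ⌊(π/4)√(N/M)⌋; the marked indices stand in for hypothetical valid, cheaper-than-threshold cycle encodings and are not derived from the paper's oracle.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter=None):
    """Simulate Grover iterations on a state vector of 2**n_qubits amplitudes."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))               # Hadamard on every qubit
    oracle = np.ones(N)
    oracle[list(marked)] = -1                        # phase-flip the marked states
    if n_iter is None:                               # optimal number of repetitions
        n_iter = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))
    for _ in range(n_iter):
        state *= oracle                              # oracle: flip sign of solutions
        state = 2 * state.mean() - state             # diffusion: inversion about mean
    return state

# Toy run: 6 "qubits", 3 hypothetical Hamiltonian cycles below the cost threshold.
amps = grover_search(6, marked={5, 17, 42})
print((amps[[5, 17, 42]] ** 2).sum())                # probability mass on marked states
```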

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 190
115 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, leaving slight deviations due to scale differences. Insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried into a future civil aircraft with a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculations is conducted to determine how well different parameterization methods apply at different airfoil scales. The research objects are three airfoil scales (the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing), used with three parameterization methods to compare the calculation differences between airfoils of different sizes. In this study, the constants are the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98, 17.67, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the wing's computed aerodynamic performance. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which runs into the severe constraint of insufficient computer capacity. When using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; this balance cannot be defined directly but must be found iteratively by adding and subtracting. Lastly, when using the CST method, a limited number of control points is enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
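
To make the CST method concrete, the following is a minimal NumPy sketch of the class/shape function transformation for one airfoil surface: a class function psi^0.5 (1 - psi) fixes the round-nose, sharp-tail topology, and a Bernstein-polynomial shape function weighted by a small set of control values plays the role of the "limited control points" noted above. The weights are illustrative placeholders, not values from the study.

```python
import numpy as np
from math import comb

def cst_surface(psi, weights, n1=0.5, n2=1.0):
    """Kulfan CST: y(psi) = C(psi) * S(psi), with psi = x/c in [0, 1]."""
    C = psi**n1 * (1 - psi)**n2                      # class function (round LE, sharp TE)
    n = len(weights) - 1
    # Bernstein-polynomial shape function built from the control weights
    S = sum(w * comb(n, i) * psi**i * (1 - psi)**(n - i)
            for i, w in enumerate(weights))
    return C * S

psi = np.linspace(0.0, 1.0, 201)                     # chordwise stations
weights = [0.17, 0.16, 0.15, 0.16, 0.15]             # illustrative placeholder weights
y_upper = cst_surface(psi, weights)
print(y_upper.max())                                 # rough maximum thickness ordinate
```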

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
114 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the growing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation, through approaches such as prescriptive methods or simulation-based methods. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as in their use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, often based on artificial intelligence. Numerous requirements and steps for designing and developing a Decision Support System (DSS) suited to the early stages of energy-efficient architectural design (consisting of combinations of different methods in an integrated package) have been listed in the literature. While various review studies have been conducted on each of these techniques (optimization, sensitivity and uncertainty analysis, etc.) and on their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as ScienceDirect and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that are well integrated with the early stages of integrated design (the schematic design and design development stages); such models are heuristic, hybrid, and meta-simulation-based, and rely on big real-world data (such as building energy management system data or Web data). Obtaining, using, and combining these data with simulation data to create models that capture uncertainty, that are more dynamic and more sensitive to context and culture, and that can generate economical, energy-efficient design scenarios using local data (better harmonized with circular economy principles) are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 162
113 Oligarchic Transitions within the Tunisian Autocratic Authoritarian System and the Struggle for Democratic Transformation: Before and beyond the 2010 Jasmine Revolution

Authors: M. Moncef Khaddar

Abstract:

This paper focuses mainly on a contextualized understanding of ‘autocratic authoritarianism’ in Tunisia, approaching its peculiarities not by reference to the ideal type of capitalist-liberal democracy but by analysing it as a Tunisian ‘civilian dictatorship’. This is reminiscent, to some extent, of French ‘colonial authoritarianism’, in parallel with the legacy of traditional formal monarchic absolutism. The Tunisian autocratic political system is construed here as a state-manufactured nationalist-populist authoritarianism associated with a de facto presidential single party, two successive autocratic presidents, and their subservient autocratic elites, who ruled the decolonized ‘liberated nation’ with an iron fist and subjected it to large-scale oppression and domination under the new Tunisian Republic. The diachronic survey of Tunisia’s autocratic authoritarian system covers the early years of autocracy under the first autocratic president, Bourguiba (1957-1987), as well as the different stages of its consolidation into a police-security state under the second autocratic president, Ben Ali (1987-2011). Comparing the policies of authoritarian regimes, within what is identified synchronically as a bi-cephalous autocratic system, entails an in-depth study of the two autocrats, who ruled Tunisia for more than half a century, as modern, adaptable autocrats. This is further supported by an exploration of the ruling authoritarian autocratic elites who played a decisive role in shaping undemocratic state-society relations under the first and second presidents and left an indelible mark, structurally and ideologically, on the Tunisian polity. Emphasis is also put on the members of the governmental and state-party institutions and apparatuses who kept circulating and recycling from one authoritarian regime to another, and from the first ‘founding’ autocrat to his putschist successor, who consolidated authoritarian stability, political continuity, and autocratic governance. The reconfiguration of Tunisian political life in the post-autocratic era, since 2011, is then analysed, especially in light of the unexpected return of many high-profile figures and old guards of the autocratic authoritarian apparatchiks. How and why were these public figures from an autocratic era able to return in a supposedly post-revolutionary moment? Finally, while some continue to celebrate the putative exceptional success of the ‘democratic transition’ in Tunisia, within a context of ‘unfinished revolution’, others remain perplexed in the face of a creeping ‘oligarchic transition’ to a ‘hybrid regime’, characterized by an elite reformist tradition rather than genuine bottom-up democratic ‘change’. The latter falls far short of answering the 2010 uprisings and ordinary people’s aspirations for ‘Dignity, Liberty and Social Justice’.

Keywords: authoritarianism, autocracy, democratization, democracy, populism, transition, Tunisia

Procedia PDF Downloads 147
112 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption

Authors: M. François, L. Sigot, C. Vallières

Abstract:

Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings’ air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It causes respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a 1 cm diameter glass column packed to a MOF bed height of 2 cm. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). Breakthrough curves make it possible to understand the interactions between the MOF and the pollutant, as well as the role of HKUST-1 humidity in the adsorption process. Consequently, different MOF water contents were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material (without any pretreatment), containing 30% water, serves as a reference. First conclusions can be drawn from comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. Saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase in outlet concentration is observed for the material with lower humidity, in contrast to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what explains the shape of the breakthrough curves associated with the pretreatments of HKUST-1 and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
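
As an illustration of how the breakthrough time at C/Co = 10% is read off a measured curve, the sketch below linearly interpolates the crossing time from sampled C/Co-versus-time data; the arrays are illustrative placeholders, not the measured HKUST-1 curves.

```python
import numpy as np

def breakthrough_time(t, c_ratio, threshold=0.10):
    """Interpolate the time at which C/Co first crosses a given threshold."""
    idx = np.argmax(c_ratio >= threshold)            # first sample at/above threshold
    if idx == 0:
        return t[0]
    # linear interpolation between the two bracketing samples
    t0, t1 = t[idx - 1], t[idx]
    c0, c1 = c_ratio[idx - 1], c_ratio[idx]
    return t0 + (threshold - c0) * (t1 - t0) / (c1 - c0)

# Illustrative data only (hours, dimensionless C/Co), not the measured curves.
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
c = np.array([0.0, 0.01, 0.04, 0.09, 0.15, 0.45, 0.90])
print(breakthrough_time(t, c))                       # ~0.79 h for this toy curve
```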

Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence

Procedia PDF Downloads 237
111 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of the fuzzy inference type (Sugeno or Mamdani) and of the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when all inputs were at the upper end of their range. This shows that the applicability of ANFIS models depends highly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that can be used for prediction; in the PV case study, it generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee results more accurate than those generated by the standard life cycle methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
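
For readers unfamiliar with the conventional FIS mentioned above, the following self-contained sketch implements a single-rule Mamdani-type inference in NumPy (triangular membership functions, min implication, centroid defuzzification); the universes, membership ranges, and the rule itself are invented for illustration and are not the calibrated PV model.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function on universe x."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Universes of discourse (illustrative ranges, not the calibrated model)
irr = np.linspace(800, 2400, 401)        # solar irradiation, kWh/m2/yr
lcoe = np.linspace(0.02, 0.30, 401)      # levelized cost of energy, USD/kWh

irr_high = trimf(irr, 1600, 2400, 3200)       # "irradiation is high"
lcoe_low = trimf(lcoe, -0.06, 0.02, 0.10)     # "LCOE is low"

x_in = 2000.0                                 # crisp input irradiation
fire = np.interp(x_in, irr, irr_high)         # rule firing strength (degree of "high")
clipped = np.minimum(fire, lcoe_low)          # Mamdani min implication
# Centroid defuzzification of the (single-rule) aggregated output set
lcoe_crisp = np.trapz(clipped * lcoe, lcoe) / np.trapz(clipped, lcoe)
print(round(lcoe_crisp, 3))
```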

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 164
110 A Measurement Instrument to Determine Curricula Competency of Licensure Track Graduate Psychotherapy Programs in the United States

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

We developed a novel measurement instrument to assess Knowledge of Educational Programs in Professional Psychotherapy Programs (KEP-PPP or KEP-Triple P) within the United States. The instrument was designed by a Panel of Experts (PoE) consisting of licensed psychotherapists and medical care providers. Licensure-track psychotherapy programs are listed in the databases of the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE), the American Psychological Association (APA), the Council on Social Work Education (CSWE), and the Council for Accreditation of Counseling & Related Educational Programs (CACREP). A complete list of psychotherapy programs can be obtained from these professional databases by selecting the search fields (All Programs) in (All States). Each program has a Web link that connects directly to the institutional program, which can be researched using the KEP-Triple P. The 29-item KEP-Triple P consists of six categorical fields: Institutional Type, Degree, Educational Delivery, Accreditation, Coursework Competency, and Special Program Considerations. The KEP-Triple P was designed to determine whether a specific course(s) is offered in licensure-track psychotherapy programs, and it can be modified to assess any part or all of the curriculum of licensure graduate programs. We utilized the KEP-Triple P instrument to study whether a graduate course in Addictions was offered in Marriage and Family Therapy (MFT) programs. Marriage and family therapists are likely to encounter patients with addictions due to their broad treatment scope, providing psychotherapy services to individuals, couples, and families of all age groups. Our study of 124 MFT programs, which concluded at the end of 2016, found that we were able to assess 61% of programs (N = 76), since 27% (N = 34) were inaccessible due to broken Web links and, of all MFT programs, 11% (N = 14) did not have a published curriculum on their institutional Web site. From the study sample, we found that 66% (N = 50) of curricula did not offer a course in Addiction Treatment and that 34% (N = 26) required a mandatory course in Addiction Treatment. We also determined that 15% (N = 11) of MFT doctorate programs did not require an Addiction Treatment course and that 1% (N = 1) did require such a course. We found that 99% of our study sample offered a campus-based program and 1% offered a hybrid program with both online and residential components. From the total sample studied, we determined that 84% of programs would be able to obtain reaccreditation within a five-year period. We recommend that MFT programs initiate procedures to revise curricula to include a required course in Addiction Treatment before their next accreditation cycle, to help address the escalating addiction crisis in the United States. This disparity in MFT curricula raises serious ethical and legal considerations for national and federal stakeholders as well as for patients seeking a competently trained psychotherapist.

Keywords: addiction, competency, curriculum, psychotherapy

Procedia PDF Downloads 151
109 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study

Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das

Abstract:

The aldol reaction is an important class of carbon-carbon bond-forming reactions. One popular way to impose asymmetry in the aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which finally evolves into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control the stereoselectivity of various carbon-carbon bond-forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades, the use of water as a solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze the asymmetric aldol reaction in aqueous medium; moreover, in organic solvent medium, a high catalytic loading (~30 mol%) is required to achieve moderate to high asymmetric induction. In this context, huge efforts have been made to modify L-proline and 4-hydroxy-L-proline to prepare organocatalysts for the aqueous-medium asymmetric aldol reaction. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-NO₂-benzaldehyde, and t-butyraldehyde with a number of ketones, using L-proline hydrazide as organocatalyst under wet, solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian and IRC calculations. As the reactions were carried out under solvent-free conditions, no solvent effects were studied theoretically. The present study reveals, for the first time, the direct involvement of two water molecules in the aldol transition structures. In the TS, the enamine and the aldehyde are connected through hydrogen bonding with the assistance of two intervening water molecules, forming a supramolecular network. The formation of this type of supramolecular assembly is possible due to the presence of the protonated -NH₂ group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. The present study also reveals that the water-assisted TS is energetically more favorable than the TS without any water molecule. It can be concluded that inserting a polar group capable of hydrogen bonding into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet, solvent-free conditions by reducing the activation barrier of the reaction.
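
Since the enantiomeric excess implied by competing diastereomeric transition structures follows from their free-energy difference via transition-state theory, a worked example may help: ee = (e^(ddG/RT) - 1)/(e^(ddG/RT) + 1). The barrier differences below are illustrative, not values computed in this work.

```python
import numpy as np

def enantiomeric_excess(ddg_kcal, T=298.15):
    """ee from the free-energy difference between competing TSs (kcal/mol)."""
    R = 1.987204e-3                       # gas constant, kcal/(mol K)
    ratio = np.exp(ddg_kcal / (R * T))    # rate ratio k_major/k_minor (TST)
    return (ratio - 1) / (ratio + 1)

# Illustrative barrier differences, not computed values from this study.
for ddg in (0.5, 1.0, 2.0):
    print(f"ddG = {ddg} kcal/mol  ->  ee = {enantiomeric_excess(ddg):.1%}")
```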

Keywords: aldol reaction, DFT, organocatalysis, transition structure

Procedia PDF Downloads 433
108 Using a Card Game as a Tool for Developing a Design

Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner

Abstract:

Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective that allows us to view boundaries between online and offline, as well as formal, informal, or hybrid contexts, as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt therefore investigates school music lessons with the help of interviews, videography, and network maps, analyzing new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development, focusing on the question of which methods and practices circulate in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently allow us to reconstruct musical learning practices under these conditions. In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.

Keywords: card game, collective songwriting, community of practice, network, postdigital

Procedia PDF Downloads 64
107 Design and Construction of a Solar Dehydration System as a Technological Strategy for Food Sustainability in Difficult-to-Access Territories

Authors: Erika T. Fajardo-Ariza, Luis A. Castillo-Sanabria, Andrea Nieto-Veloza, Carlos M. Zuluaga-Domínguez

Abstract:

The growing emphasis on sustainable food production and preservation has driven the development of innovative solutions to minimize postharvest losses and improve market access for small-scale farmers. This project focuses on designing, constructing, and selecting materials for solar dryers in regions of Colombia where inadequate infrastructure limits access to major commercial hubs. Postharvest losses pose a significant challenge, impacting food security and farmer income; addressing them is crucial for enhancing the value of agricultural products and supporting local economies. A comprehensive survey of local farmers revealed substantial challenges, including limited market access, inefficient transportation, and significant postharvest losses. For crops such as coffee, bananas, and citrus fruits, losses range from 0% to 50%, driven by factors such as labor shortages, adverse climatic conditions, and transportation difficulties. To address these issues, the project prioritized selecting effective materials for the solar dryer. Various materials (recovered acrylic, original acrylic, glass, and polystyrene) were tested for their performance. The tests showed that recovered acrylic and glass were the most effective at increasing the temperature difference between the interior and the external environment. The solar dryer was designed in Fusion 360® software (Autodesk, USA) following guidelines from Architectural Graphic Standards. It features up to sixteen aluminum trays, each with a maximum load capacity of 3.5 kg, arranged on two levels to optimize drying efficiency. The constructed dryer was then tested with two locally available plant materials: green plantains (Musa paradisiaca L.) and snack bananas (Musa AA Simonds). To monitor performance, thermo-hygrometers and an Arduino system recorded internal and external temperature and humidity at one-minute intervals. Despite challenges such as adverse weather conditions and delays in local government funding, the active involvement of local producers was a significant advantage, fostering ownership and understanding of the project. The solar dryer operated at a dry-bulb temperature of 31°C, 55% relative humidity, and a wet-bulb temperature of 21°C. The drying curves showed a constant-rate drying period, with the critical moisture content observed between 200 and 300 minutes, followed by a sharp decrease in moisture loss, reaching an equilibrium point after 3,400 minutes. Although the solar dryer requires more time and is highly dependent on atmospheric conditions, it can approach the efficiency of an electric dryer when properly optimized. The successful design and construction of solar dryer systems in difficult-to-access areas represent a significant advancement in agricultural sustainability and postharvest loss reduction. By choosing effective materials such as recovered acrylic and implementing a carefully planned design, the project provides a valuable tool for local farmers. The initiative not only improves the quality and marketability of agricultural products but also offers broader environmental benefits, such as reduced reliance on fossil fuels and decreased waste. Additionally, it supports economic growth by enhancing the value of crops and potentially increasing farmer income. The successful implementation and testing of the dryer, combined with the engagement of local stakeholders, highlight its potential for replication and positive impact in similar contexts.
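
A common way to summarize drying curves like these is the dimensionless moisture ratio MR = (M - Me)/(M0 - Me) together with a first-order (Lewis/Newton) thin-layer model MR = exp(-kt); the sketch below fits the rate constant k with SciPy. The data points are invented placeholders, not the plantain or banana measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def moisture_ratio(m, m0, me):
    """Dimensionless moisture ratio MR = (M - Me) / (M0 - Me)."""
    return (m - me) / (m0 - me)

def lewis(t, k):
    """Lewis/Newton thin-layer drying model."""
    return np.exp(-k * t)

# Illustrative drying data (minutes, moisture content on a dry basis)
t = np.array([0, 200, 400, 800, 1600, 3400], dtype=float)
m = np.array([3.0, 2.1, 1.5, 0.8, 0.3, 0.12])
mr = moisture_ratio(m, m0=3.0, me=0.10)

(k,), _ = curve_fit(lewis, t, mr, p0=[1e-3])
print(f"fitted k = {k:.2e} 1/min, half-time = {np.log(2)/k:.0f} min")
```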

Keywords: drying technology, postharvest loss reduction, solar dryers, sustainable agriculture

Procedia PDF Downloads 29
106 Analyzing Consumer Preferences and Brand Differentiation in the Notebook Market via Social Media Insights and Expert Evaluations

Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari

Abstract:

This study investigates consumer behavior in the notebook computer market by integrating social media sentiment analysis with expert evaluations. The rapid evolution of the notebook industry has intensified competition among manufacturers, necessitating a deeper understanding of consumer priorities. Social media platforms, particularly Twitter, have become valuable sources for capturing real-time user feedback. In this research, sentiment analysis was performed on Twitter data gathered in the last two years, focusing on seven major notebook brands. The PyABSA framework was utilized to extract sentiments associated with various notebook components, including performance, design, battery life, and price. Expert evaluations, conducted using fuzzy logic, were incorporated to assess the impact of these sentiments on purchase behavior. To provide actionable insights, the TOPSIS method was employed to prioritize notebook features based on a combination of consumer sentiments and expert opinions. The findings consistently highlight price, display quality, and core performance components, such as RAM and CPU, as top priorities across brands. However, lower-priority features, such as webcams and cooling fans, present opportunities for manufacturers to innovate and differentiate their products. The analysis also reveals subtle but significant brand-specific variations, offering targeted insights for marketing and product development strategies. For example, Lenovo's strong performance in display quality points to a competitive edge, while Microsoft's lower ranking in battery life indicates a potential area for R&D investment. This hybrid methodology demonstrates the value of combining big data analytics with expert evaluations, offering a comprehensive framework for understanding consumer behavior in the notebook market. The study emphasizes the importance of aligning product development and marketing strategies with evolving consumer preferences, ensuring competitiveness in a dynamic market. It also underscores the potential for innovation in seemingly less important features, providing companies with opportunities to create unique selling points. By bridging the gap between consumer expectations and product offerings, this research equips manufacturers with the tools needed to remain agile in responding to market trends and enhancing customer satisfaction.
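
For reference, the TOPSIS ranking step used above can be expressed in a few lines of NumPy: vector-normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by relative closeness. The matrix, weights, and criteria below are illustrative stand-ins, not the study's sentiment and expert data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector-normalize columns
    v = m * weights                                  # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)                   # relative closeness in [0, 1]

# Rows: hypothetical brands; columns: price, display, CPU/RAM, battery.
scores = np.array([[850, 8.5, 7.0, 6.0],
                   [700, 7.0, 8.0, 7.5],
                   [950, 9.0, 8.5, 5.5]], dtype=float)
weights = np.array([0.35, 0.25, 0.25, 0.15])         # illustrative priorities
benefit = np.array([False, True, True, True])        # price is a cost criterion
print(topsis(scores, weights, benefit))              # higher = closer to ideal
```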

Keywords: consumer behavior, customer preferences, laptop industry, notebook computers, social media analytics, TOPSIS

Procedia PDF Downloads 23
105 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH heats the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density: electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Above this threshold, additional power deposition mechanisms can occur that increase the plasma density: collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A deeper knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat-balance equations. This code was initially benchmarked against experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modeling is compared with the experimental data.
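
To make the resonance geometry concrete, the sketch below evaluates the electron cyclotron frequency, the plasma frequency, the upper hybrid frequency f_UH = sqrt(f_pe^2 + f_ce^2), and the right-hand X-mode cut-off from standard cold-plasma formulas; the field and density values are chosen for illustration and are not TOMAS operating parameters.

```python
import numpy as np

E = 1.602e-19      # elementary charge, C
ME = 9.109e-31     # electron mass, kg
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def frequencies(B, n_e):
    """Characteristic EC frequencies for field B (T) and density n_e (m^-3)."""
    f_ce = E * B / (2 * np.pi * ME)                          # electron cyclotron
    f_pe = np.sqrt(n_e * E**2 / (EPS0 * ME)) / (2 * np.pi)   # plasma frequency
    f_uh = np.hypot(f_ce, f_pe)                              # upper hybrid resonance
    f_r = 0.5 * (f_ce + np.sqrt(f_ce**2 + 4 * f_pe**2))      # right-hand X-mode cut-off
    return f_ce, f_pe, f_uh, f_r

# Illustrative low-field, low-density values (not TOMAS operating parameters)
for name, f in zip(("f_ce", "f_pe", "f_UH", "f_R"), frequencies(0.09, 1e17)):
    print(f"{name} = {f/1e9:.2f} GHz")
```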

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 95
104 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light source to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and are characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, in this work, different combinations of a standard commercial SLA resin (Peopoly UV Professional) with a vegetable-based resin were investigated. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. 1.0 wt% of diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) was used as photo-initiator, and the samples were printed using a Peopoly Moai 130. The machine was set to its standard configuration for printing commercial resins. After printing, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin at different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylic groups, decrease significantly after the reaction, indicating consumption of the double bonds during radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both evidence of successful polymerization. Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to commercial petroleum-based ones.

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 128
103 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery

Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu

Abstract:

Recently, anaerobic membrane bioreactors (AnMBRs), which combine anaerobic biological treatment with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. A conventional AnMBR has several advantages, such as improved effluent quality, compact footprint, lower sludge yield, no aeration requirement, and energy production. However, the removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is its biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane allows it to effectively reject nitrogen and phosphorus. An anaerobic bioreactor with an FO membrane (AnOMBR) can therefore retain concentrated organic matter and nutrients. Phosphorus, moreover, is a non-renewable resource, and due to the high rejection of the FO membrane, a large amount of phosphorus can be recovered from the combination of AnMBR and FO. In this study, a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction was developed for simultaneous phosphorus and clean water recovery from wastewater. A laboratory-scale AnOMBR using a cellulose triacetate (CTA) membrane with an effective membrane area of 130 cm² was fully submerged into a 5.5 L bioreactor at 30-35°C. The active-layer-facing-feed-stream orientation was used to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s, with magnesium sulphate (MgSO₄) solution as the DS. The microfiltration membrane periodically extracted about 1 L of solution whenever the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor. As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, but only partial removal of ammonia; an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system thus periodically uses MF extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that a novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery from wastewater, and hence this new concept could replace conventional AnMBRs in the future.
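
As a sanity check on the reported flux, water flux in LMH (L/(m²·h)) relates permeate volume, membrane area, and time as J = V/(A·t); the snippet below converts the reported 1.6 LMH through the 130 cm² membrane into a daily permeate volume.

```python
# Water flux J = V / (A * t); rearranged to V = J * A * t.
flux_lmh = 1.6          # reported average flux, L/(m^2 h)
area_m2 = 130e-4        # 130 cm^2 effective membrane area in m^2
hours = 24.0

volume_l = flux_lmh * area_m2 * hours
print(f"daily permeate: {volume_l:.2f} L")   # ~0.50 L/day for this module
```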

Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor

Procedia PDF Downloads 270
102 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach

Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal

Abstract:

Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate, or raw material for a number of higher-value products, fuels, or additives. Over the last decade, the total global demand for methanol has increased drastically, forcing scientists to produce large amounts of methanol from renewable sources to meet global demand in a sustainable way. Different types of non-renewable raw materials have been used for the synthesis of methanol on a large scale, which makes the process unsustainable. In these circumstances, the photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach that not only addresses the environmental crisis by recycling CO₂ to fuels but also reduces the amount of CO₂ in the atmosphere. The development of such a sustainable production approach for CO₂ conversion into methanol still remains a major challenge in current research compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials, such as photocatalysts, has gained great importance for methanol synthesis. Scientists in this field are always concerned with finding improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work relates to the synthesis of an improved heterogeneous graphene-based photocatalyst with improved catalytic activity and surface area. Graphene with enhanced surface area is used as a coupling material for copper-loaded titanium oxide to improve the electron capture and transport properties, which substantially increases photoinduced charge transfer and extends the lifetime of photogenerated charge carriers. A fast reduction method through H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of graphene-coupled copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature, and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst with enhanced surface area gives a sustained methanol productivity and yield of 0.14 g/Lh and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.

Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol

Procedia PDF Downloads 108
101 Features of Composites Application in Shipbuilding

Authors: Valerii Levshakov, Olga Fedorova

Abstract:

Specific features of ship structures made from composites (simultaneous shaping of material and structure, large sizes, complicated outlines, and tapered thickness) have given a leading role to technology that integrates results from materials science, design, and structural analysis. The main procedures of composite shipbuilding are contact molding, vacuum molding, and winding. Currently, the most in-demand composite shipbuilding technology is the manufacture of structures from fiberglass and multilayer hybrid composites by means of vacuum molding. Compared with contact molding, this technology enables the manufacture of products with improved strength properties, reduces production time and weight, and secures better environmental conditions in the production area. Mechanized winding is applied for the manufacture of parts shaped as rotary bodies: sections of ship, oil, and other pipelines, deep-submergence vehicle hulls, bottles, reservoirs, and other structures. This procedure involves the processing of reinforcing glass, carbon, and polyaramide fibers. Polyaramide fibers have a tensile strength of 5000 MPa and an elastic modulus of about 130 GPa; their rigidity is comparable to that of fiberglass, while their weight is 30% lower. This enables the manufacture of different structures, including ones using both fiberglass and organic composites. Organic composites are widely used for the manufacture of parts with size and weight limitations, although the high price of polyaramide fiber restricts their use. A promising area of winding technology development is the manufacture of carbon fiber shafts and couplings for ships. JSC ‘Shipbuilding & Shiprepair Technology Center’ (JSC SSTC) developed a technology of dielectric uncouplers for cryogenic lines cooled by gaseous or liquid cryogenic agents (helium, nitrogen, etc.) for the temperature range 4.2-300 K and pressures up to 30 MPa; these are used to separate components of electrophysical equipment held at different electrical potentials. Dielectric uncouplers were developed, manufactured, and tested in accordance with the International Thermonuclear Experimental Reactor (ITER) technical specification. Spiral uncouplers withstand an operating voltage of 30 kV, and direct-flow uncouplers 4 kV. Using a spiral channel instead of a rectilinear one increases the breakdown potential and reduces uncoupler size. 95 uncouplers were successfully manufactured and tested. At present, Russian manufacturers of ship composite structures have begun to adopt automated prepreg laminating; this technology enables the manufacture of structures with improved operational specifications.

Keywords: fiberglass, infusion, polymeric composites, winding

Procedia PDF Downloads 238
100 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques

Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Indium tin oxide (ITO) remains the industry-standard transparent conducting oxide, with better performance than alternatives. Recently, graphene has emerged as a strong candidate material with unique properties to replace ITO. However, graphene/ITO hybrid composites are a newly born field in the electronics world. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique from economical In(NO₃)₃·H₂O and SnCl₄ salts, without organic additives. Oxygen-plasma-treated glass substrates with enhanced wettability and surface free energy (97.6986 mJ/m²) were used to form a void-free, continuous ITO film. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain a crystallized ITO film. The crystal structure and crystalline phases of the ITO films were analyzed by X-ray diffraction (XRD), and the Scherrer equation was used to determine the crystallite size. The chemical and elemental composition of the ITO film was determined by X-ray photoelectron spectroscopy (XPS) and energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by the chemical vapor deposition (CVD) method on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser), with data collected at room temperature under a normal atmosphere. Surface and cross-sectional observations were made using FE-SEM. The optical transmission and sheet resistance were measured by UV-Vis spectroscopy and a four-point probe at room temperature, respectively; electrical properties were also measured using V-I characteristics. XRD patterns reveal that the films contain only the In₂O₃ phase and exhibit a polycrystalline cubic structure with a dominant (222) peak. The peak positions of In 3d₅/₂ (444.28 eV) and Sn 3d₅/₂ (486.7 eV) in the XPS results indicate that indium and tin are present in oxide form only. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 × 10⁻³ Ω·cm. The G and 2D bands in the Raman spectrum appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively, for CVD graphene on SiO₂/Si. The intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) were 1.531 and 0.108, respectively. For CVD graphene on the ITO-coated glass, however, the G and 2D peaks appear at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, respectively; that is, they are red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹. This graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: the sheet resistance of the bi-film was reduced by 12.03% relative to the ITO film. Further, the fabricated graphene/ITO bi-film shows 88.66% transmittance at a wavelength of 550 nm.
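
Since the Scherrer equation is invoked above, a worked example follows: D = Kλ/(β cos θ), with shape factor K ≈ 0.9, Cu Kα wavelength λ = 0.15406 nm, FWHM β in radians, and θ half the diffraction angle. The peak position used is the usual (222) reflection of cubic In₂O₃; the peak width is an illustrative number, not the measured broadening.

```python
import numpy as np

def scherrer(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)), in nm."""
    theta = np.radians(two_theta_deg / 2)       # Bragg angle
    beta = np.radians(fwhm_deg)                 # peak FWHM in radians
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative (222) reflection of cubic In2O3 near 2theta ~ 30.6 deg (Cu K-alpha)
print(f"D = {scherrer(30.6, 0.4):.1f} nm")      # 0.4 deg FWHM is a placeholder
```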

Keywords: chemical vapor deposition, graphene, ITO, Raman Spectroscopy, sol-gel

Procedia PDF Downloads 260
99 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In today's volatile and interconnected global financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a novel method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, using a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. Daily opening, closing, high, and low prices are collected for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insight into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, are integrated into the model. The GCN component learns the relational patterns among financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation of the GCN-LSTM algorithm on the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing predictive performance on directional market movements, the model achieved an accuracy of 78%, significantly outperforming the benchmark models' average accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to advance investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
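The following minimal PyTorch sketch illustrates the architecture described above: a GCN embeds the market graph at each time step, and an LSTM reads the resulting spatial-temporal sequence. This is not the authors' code; the normalized adjacency `a_hat`, the window length, and all dimensions are illustrative assumptions.

```python
# Minimal GCN-LSTM fusion sketch (illustrative reconstruction, not the authors' code).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):                     # x: (batch, T, N, in_dim)
        return torch.relu(a_hat @ self.linear(x))    # a_hat broadcasts over batch and time

class GCNLSTM(nn.Module):
    def __init__(self, n_nodes, in_dim, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(in_dim, gcn_dim)
        self.lstm = nn.LSTM(n_nodes * gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_nodes)     # one prediction per instrument

    def forward(self, x, a_hat):
        h = self.gcn(x, a_hat)                       # (batch, T, N, gcn_dim)
        h = h.flatten(start_dim=2)                   # (batch, T, N * gcn_dim)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                 # predict from the last time step

# Toy usage: 8 instruments, 30-day windows, 6 features (OHLC, volume, macro signal).
N, T, F = 8, 30, 6
a_hat = torch.eye(N) + 0.1 * torch.rand(N, N)        # stand-in for the normalized market graph
model = GCNLSTM(N, F)
pred = model(torch.randn(4, T, N, F), a_hat)          # -> (4, 8)
```

In a real setting, `a_hat` would be built from the co-movement and sentiment correlations described in the abstract, and the model trained on next-day price movements.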

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 65
98 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created for the sake of routine medical care. The healing of these injuries is usually monitored by visual inspection. The maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are negatively influenced by various factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing conditions is burdensome for both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional, and molecular information at high penetration depths. Therefore, the molecular and structural mechanisms of tissue repair are readily observable with PAI, from the superficial layer down to deep tissue regions. Blood vessel formation and growth are essential components of tissue repair. These vessels supply nutrients and oxygen to the cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair, and healing outcomes depend directly on it. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier: it is ideally suited for deep tissue imaging and exploits the rich optical contrast generated by hemoglobin in blood vessels. Hence, early detection of angiogenesis with PAI supports monitoring of wound treatment. Along with functional properties, mechanical properties also play a key role in tissue regeneration. A wound heals through a dynamic series of physiological events, such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. Therefore, changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing, and both can be characterized in a single imaging modality (PAI).
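For readers unfamiliar with the contrast mechanism, the initial photoacoustic pressure is commonly written as follows (a standard textbook relation, not a formula stated in this abstract):

```latex
p_0 = \Gamma \, \eta_{th} \, \mu_a \, F
```

where \(\Gamma\) is the Grüneisen parameter, \(\eta_{th}\) the fraction of absorbed light converted to heat, \(\mu_a\) the optical absorption coefficient, and \(F\) the local light fluence. Because \(\mu_a\) of hemoglobin is high, blood vessels dominate the signal, which is what makes PAI well suited to tracking angiogenesis at depth.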

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 106
97 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience, a measure of the ability with which a network can return to its pre-disaster level of functionality, is widely used for this purpose. In practice, highways are temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. Failing to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and generating suitable restoration schedules for highway networks in the recovery phase. To address these issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality helps to generate a more specific and reasonable LBRS. The contributions are twofold. First, the proposed network recovery curve enables comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Second, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
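A minimal numerical sketch of the core idea, resilience as the normalized area under the functionality recovery curve, is given below; the curve shape and the downtime windows are invented for illustration and are not the study's data.

```python
# Resilience as normalized area under the functionality curve Q(t), showing how
# restoration downtime (temporary closures during repairs) lowers the estimate.
import numpy as np

def resilience(t, q):
    """Resilience = area under Q(t) divided by full functionality over the horizon."""
    return np.trapz(q, t) / (t[-1] - t[0])

t = np.linspace(0, 100, 1001)                          # days in the recovery horizon
q_no_downtime = np.clip(0.6 + 0.004 * t, 0, 1.0)       # smooth recovery from 60%

q_with_downtime = q_no_downtime.copy()
for start, length, drop in [(20, 5, 0.15), (55, 8, 0.10)]:   # hypothetical repair closures
    mask = (t >= start) & (t < start + length)
    q_with_downtime[mask] -= drop                      # functionality dips while a bridge is closed

print(f"resilience, downtime ignored:  {resilience(t, q_no_downtime):.3f}")
print(f"resilience, downtime included: {resilience(t, q_with_downtime):.3f}")
```

In this toy example, ignoring the closure dips inflates the resilience estimate, qualitatively mirroring the overestimation reported in the study.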

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 150
96 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries

Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut

Abstract:

Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices, and electric/hybrid vehicles, due to their long cycle life, high voltage, and high energy density. Graphite has been widely used as an anode material owing to its extraordinary electronic transport properties, large surface area, and high electrocatalytic activity, although its limited specific capacity (372 mAh g⁻¹) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To address this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites have emerged as a promising class of materials for lithium-ion batteries because their specific capacities are much higher than that of graphene. Among them, SnO₂, an n-type, wide-band-gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries owing to its high theoretical capacity (790 mAh g⁻¹). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact in the anode. In addition, there is a large irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. To obtain high-capacity anode materials, we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material in this work. For this aim, graphite oxide was first obtained from graphite powder by the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off from the PVDF membrane to obtain a flexible, free-standing GO paper. Then, the GO structure was reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester at a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mV s⁻¹, and electrochemical impedance spectroscopy (EIS) measurements were carried out using a Gamry instrument, applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz–0.01 Hz.
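The quoted specific capacities follow from Faraday's law, \(Q = nF/(3.6\,M)\) in mAh g⁻¹; as a consistency check (our arithmetic, not a calculation from the paper):

```latex
Q_{\mathrm{graphite}} = \frac{1 \times 96485}{3.6 \times 72.06} \approx 372\ \mathrm{mAh\,g^{-1}},
\qquad
Q_{\mathrm{SnO_2}} = \frac{4.4 \times 96485}{3.6 \times 150.71} \approx 782\ \mathrm{mAh\,g^{-1}}
```

Here \(n\) is the number of lithium equivalents (one Li per C₆ ring for graphite; 4.4 Li per Sn for the alloying reaction to Li₄.₄Sn) and \(M\) the molar mass of the host. The SnO₂ value assumes the alloying reaction alone and lands close to the 790 mAh g⁻¹ quoted above; treating the conversion reaction as reversible as well would roughly double it.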

Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery

Procedia PDF Downloads 227
95 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine drives a generator that produces electricity, which is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is used in the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy: they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle and is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it oxidizes. The products of this chemical reaction are electricity and water (in two forms: liquid and vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used) and is used to drive the traction motors and supply onboard equipment. The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes, a line of research that connects industry with science. The first goal is to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 – M.P., and grant 0911/SBAD/2102 – B.K.
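The electricity-plus-water output of the cell reflects standard hydrogen electrochemistry; as a reference point (textbook thermodynamic values, not figures from this work), the ideal cell voltage is:

```latex
\mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O},
\qquad
E^{\circ} = \frac{-\Delta G^{\circ}}{nF} = \frac{237.1\ \mathrm{kJ\,mol^{-1}}}{2 \times 96485\ \mathrm{C\,mol^{-1}}} \approx 1.23\ \mathrm{V}
```

Practical cells deliver roughly 0.6-0.8 V under load, which is why cells are stacked and why the main converter must condition the output for the batteries and traction motors.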

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 189
94 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of any single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSE values of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement.
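The two-layer design described above can be sketched as follows; this is an illustrative reconstruction, not the study's code, and the drivers, targets, and hyperparameters are placeholders.

```python
# Two-layer ensemble sketch: five FFNNs with different structures feed their
# predictions into an XGBoost model that makes the final CO2-flux estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                                          # meteorological drivers (synthetic)
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=2000)    # CO2-flux proxy

# Layer 1: five feedforward networks with different hidden-layer structures.
structures = [(16,), (32,), (64,), (32, 16), (64, 32)]
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=2000, random_state=i)
         for i, s in enumerate(structures)]
for net in ffnns:
    net.fit(X, y)

# Layer 2: XGBoost takes the first-layer outputs as its input features.
layer1_out = np.column_stack([net.predict(X) for net in ffnns])
xgb = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
xgb.fit(layer1_out, y)

# Gap-filling: predict flux for time steps where the tower record is missing.
X_gap = rng.normal(size=(10, 8))
y_filled = xgb.predict(np.column_stack([net.predict(X_gap) for net in ffnns]))
```

A production version would fit the first layer on held-out folds before training XGB, so the second layer does not learn from in-sample predictions.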

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 139
93 Exploring the Success of Live Streaming Commerce in China: A Literature Analysis

Authors: Ming Gao, Matthew Tingchi Liu, Hoi Ngan Loi

Abstract:

Live streaming refers to video content generated by broadcasters and shared with viewers in real time by uploading it to short-video platforms. In recent years, individual key opinion leader (KOL) broadcasters have successfully used live streams to sell large volumes of goods to consumers. For example, Wei Ya, the number 1 broadcaster on Taobao Live, sold products worth RMB 2.7 billion (USD 0.38 billion) in 2018. Regarding the success of live streaming commerce (LSC) in China, this study explores the elements of the booming LSC industry and attempts to explain the reasons behind its prosperity. A systematic review of industry reports and academic papers was conducted to summarize the latest findings in this field. The results of this investigation showed that a live streaming ecosystem has been established by the LSC players, namely the platform, the broadcaster, the product supplier, and the viewer. In this ecosystem, all players have complementary advantages and needs, and their close cooperation leads to a win-win situation. For instance, platforms and broadcasters have abundant internet traffic, which needs to be monetized, while product suppliers have mature supply chains and the need to promote their products. In addition, viewers are attracted to LSC platforms for product information, bargains, and entertainment. This study highlights the importance of the mass-personal hybrid communication nature of live streaming: its interpersonal communication feature increases consumers' positive experiences, while its mass media broadcasting feature facilitates product promotion. Another innovative point of this study lies in its inclusion of a special characteristic of Chinese Internet culture: entertainment. The entertaining genres of the live streams created by broadcasters serve as down-to-earth approaches that reach their audiences easily. Further, the nature of video, i.e., a dynamic and salient stimulus, is emphasized in this study. Because video is more engaging, it can attract viewers quickly and easily. Meanwhile, abundant, interesting, high-quality, and free short videos have added "stickiness" to platforms by retaining users and prolonging the time they spend on the platforms. In addition, broadcasters' characteristics, such as physical attractiveness, humor, sex appeal, kindness, communication skills, and interactivity, are also identified as important factors that influence consumers' engagement and purchase intention. In conclusion, all players have their proper places in this live streaming ecosystem, in which they work seamlessly to exploit their respective advantages, with each player taking what it needs and offering what it has. This has contributed to the success of live streaming commerce in China.

Keywords: broadcasters, communication, entertainment, live streaming commerce, viewers

Procedia PDF Downloads 122
92 Furniko Flour: An Emblematic Traditional Food of Greek Pontic Cuisine

Authors: A. Keramaris, T. Sawidis, E. Kasapidou, P. Mitlianga

Abstract:

Although the gastronomy of the Greeks of Pontus is highly prominent, it has not received the same level of scientific analysis as another local cuisine of Greece, that of Crete. As a result, we decided to focus our research on Greek Pontic cuisine to shed light on its unique recipes, food products, and, ultimately, its features. The Greeks of Pontus, who lived for a long time in the northern part (Black Sea region) of contemporary Turkey and now widely inhabit northern Greece, have one of Greece's most distinguished local cuisines. Although their gastronomy is simple, it features several inspiring delicacies. A century has passed since they immigrated to Greece, yet their gastronomic culture remains a critical component of their collective identity. As a first step toward comprehending Greek Pontic cuisine, we investigated the production of one of its most renowned traditional products, furniko flour. In this project, we targeted residents of Western Macedonia, a province in northern Greece with a large population of descendants of the Greeks of Pontus who are primarily engaged in agricultural activities. In this quest, we approached a descendant of the Greeks of Pontus who is involved in the production of furniko flour and who consented to show us the entire process of its production as we participated in it. Furniko flour is made from non-hybrid heirloom corn. The corn is harvested by hand when the moisture content of the seeds is low enough to make them suitable for roasting. Manual harvesting entails removing the cob from the plant and detaching the husks. The harvested cobs are then roasted for 24 hours in a traditional wood oven, after which they are collected and stored in sacks. The next step is to extract the seeds, which is accomplished by rubbing the cobs. The seeds should ideally be ground in a traditional stone hand mill. The result is an aromatic, dark golden furniko flour, which is used to cook havitz. Alongside the preparation of the furniko flour, we also recorded the cooking process of havitz (a porridge-like cornflour dish), a savory delicacy that is simple to prepare and one of the most delightful dishes in Greek Pontic cuisine. According to the research participant, havitz is a highly nutritious dish owing to the ingredients of furniko flour. In addition, he argues that preparing havitz is a great way to bring families together, share stories, and revisit fond memories. In conclusion, this study illustrates the traditional preparation of furniko flour and its use in various traditional recipes as an initial effort to highlight the elements of Pontic Greek cuisine. A continuation of the current study could be an analysis of the chemical components of furniko flour to evaluate its nutritional content.

Keywords: furniko flour, greek pontic cuisine, havitz, traditional foods

Procedia PDF Downloads 136
91 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. In the case where the flow characteristic length scale is narrowed to around ten times the mean free path of gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, within a layer about one mean free path thick, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity-slip, temperature-jump, and heat-flux-jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. To relax these assumptions, simulations at the molecular scale are carried out to study how molecular interaction with walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid; ii) close to the walls, to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Indeed, using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state, and the slip relationships, together with their uncertainties. The latter allow setting up a learning strategy to optimize the number of micro-simulations. In the present contribution, the first results of this coupling combined with the learning strategy are illustrated through parametric studies of convergence criteria, the choice of basis functions, and noise in the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
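A minimal sketch of the regression-with-uncertainty step is given below, here using a Gaussian process as the Bayesian regressor (an assumed concrete choice, not necessarily the authors'; the molecular-dynamics samples are synthetic placeholders).

```python
# Fit a Bayesian regression to MD samples of the slip relation, then use the
# predictive uncertainty to decide where a new micro-simulation pays off most.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
shear = rng.uniform(0.0, 1.0, size=12)[:, None]              # wall shear stress (MD inputs)
u_slip = 0.8 * shear.ravel() + 0.05 * rng.normal(size=12)    # noisy MD slip velocities

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(shear, u_slip)

# Continuous slip law with uncertainty, queried by the Navier-Stokes solver.
grid = np.linspace(0.0, 1.0, 200)[:, None]
mean, std = gp.predict(grid, return_std=True)

# Learning strategy: run the next MD micro-simulation where the law is least certain.
next_point = grid[np.argmax(std), 0]
print(f"next micro-simulation at shear stress = {next_point:.3f}")
```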

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 166
90 Modified Graphene Oxide in Ceramic Composite

Authors: Natia Jalagonia, Jimsher Maisuradze, Karlo Barbakadze, Tinatin Kuchukhidze

Abstract:

At present, intensive scientific research on ceramics, cermets, and metal alloys is being conducted to improve their physical-mechanical characteristics. To increase the impact strength of alumina-based ceramics, a simple method of graphene homogenization was developed. A homogeneous distribution of graphene (homogenization) in the pressing composite became possible by connecting the functional groups of graphene oxide (-OH, -COOH, -O-O-, and others) and the superficial OH groups of alumina with aluminum organic compounds. These two components connect with each other through -O-Al-O- bonds, and on thermal treatment (300–500°C) they are transformed into the graphene and alumina phases. The choice of aluminum organic compounds for modification is motivated by the following consideration: the aluminum organic fragments fixed on graphene and alumina are ultimately transformed into an integral part of the matrix. If other elements were used as modifiers, other phases would form on the matrix (Al₂O₃) surface and sharply change the physical-mechanical properties of the ceramic composites; for this reason, the effect caused by the inclusion of graphene would be unknown. Fixing graphene fragments on the alumina surface through alumoorganic compounds results in a new type of graphene-alumina complex, in which the two components are connected by C-O-Al bonds. Some of the carbon atoms in graphene oxide are in the sp³ hybrid state, so functional groups (-OH, -COOH) are located on both sides of the graphene oxide layer. The aluminum organic compound reacts with graphene oxide at room temperature, and modified graphene oxide is obtained: R₂Al-O-[graphene]-COOAlR₂. The remaining Al-C bonds also react rapidly with the surface OH groups of alumina. As a result of these processes, the pressing powder composite [Al₂O₃]-O-Al-O-[graphene]-COO-Al-O-[Al₂O₃] is obtained. For this purpose, the alumoorganic compound Al(iC₄H₉)₃ in toluene was added to a graphene oxide suspension in dry toluene at an equimolecular ratio. The obtained suspension was placed in a flask, and the solvent was removed in a rotary evaporator under a nitrogen atmosphere. The obtained powder was characterized and used for the consolidation of alumina-based ceramic materials. Ceramic composites were obtained in a high-temperature vacuum furnace under different temperature and pressure conditions. The resulting ceramics have no open pores, and their density reaches 99.5% of the theoretical density (TD). The following devices were used during the work: a high-temperature vacuum furnace (OXY-GON Industries Inc., USA), a spark-plasma synthesis device, an induction furnace, Nikon Eclipse LV 150 electronic scanning microscopes, an NMM-800TRF optical microscope, a Pulverisette 7 premium line planetary mill, a Shimadzu DUH-211S Dynamic Ultra Micro Hardness Tester, an Analysette 12 DynaSizer, and others.

Keywords: graphene oxide, alumo-organic, ceramic

Procedia PDF Downloads 308
89 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction

Authors: Sri Sai Ramya Bojedla, Falguni Pati

Abstract:

Nature provides the best solutions to humans. One such incredible gift to regenerative medicine is silk. The literature reflects a long appreciation of silk owing to its remarkable physical and biological assets. Its bioactive nature, unique mechanical strength, and processing flexibility motivate further exploration of its clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing the silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in any part of the body except the maxillofacial region deals mainly with restoring function, but in facial reconstruction both aesthetics and function are of utmost importance, as they play a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the noteworthy variation in each patient's bone anatomy. Additionally, the anatomical shape and size vary with the type of defect. The advent of additive manufacturing (AM), or 3D printing, in bone tissue engineering has helped overcome many of the constraints of conventional fabrication techniques. The patient's acquired CT data are converted into a stereolithography (STL) file, which the 3D printer then uses to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function, and aesthetics of the facial anatomy. The composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed a uniform dispersion of the silk fibroin microfibers in the PCL matrix. With the addition of silk, the compressive strength of the hybrid scaffolds improves, and scaffolds with Antheraea mylitta silk showed a higher compressive modulus than those with Bombyx mori silk. These results strongly recommend the utilization of PCL-silk scaffolds in bone regenerative applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium.
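The CT-to-STL step can be sketched as follows; this is a generic illustration with a hypothetical file name and threshold, not the authors' pipeline.

```python
# Turn segmented CT data into an STL file for printing: threshold bone, extract
# the surface with marching cubes, and write the triangle mesh.
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl package

volume = np.load("patient_ct.npy")            # hypothetical pre-loaded CT volume (HU values)
bone = (volume > 300).astype(np.uint8)        # simple Hounsfield threshold for bone

# Marching cubes yields vertices and triangular faces of the bone surface;
# `spacing` carries the voxel dimensions (slice thickness, row, column) in mm.
verts, faces, _, _ = measure.marching_cubes(bone, level=0.5, spacing=(1.0, 0.5, 0.5))

surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    surface.vectors[i] = verts[face]          # copy the three vertices of each triangle

surface.save("scaffold_region.stl")           # STL file handed to the slicer / 3D printer
```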

Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers

Procedia PDF Downloads 197
88 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents a leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic injury scenarios. Blast overpressure injury, following accidents or explosive devices, is common among civilians and military personnel. In addition, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 (±5) kPa via a shock tube; CCI injury was induced with an impact depth of 1.5 mm, creating diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups, (1) control, (2) blast-treated, and (3) CCI-treated, and exposed to the corresponding injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS with an ESI probe using an in-house optimized method. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted lipid profiling generated extensive data features, which were annotated through LIPID MAPS® based on their m/z values and further confirmed from their fragment patterns with LipidBlast. In total, 269 features were annotated in positive and 182 in negative ionization mode. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Among the various lipids in mild blast and CCI, five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3, and PS 36:1 and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7 and also had VIP scores >1. Pathway analysis with BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports and could be a direct result of alteration of the acetylcholine signaling pathway in response to TBI. Understanding the metabolism, regulation, and transport of specific lipid classes could benefit TBI research, since it could provide new targets and inform the best therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for diagnosis of injury severity and identification irrespective of injury type (diffuse or focal).
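A minimal sketch of the preprocessing and feature-selection steps named above (Pareto scaling, PLS-DA, VIP > 1) is given below; the data matrix is synthetic and the group sizes are placeholders, not the study's data.

```python
# Log-transform and Pareto-scale the feature matrix, fit PLS-DA, and keep
# lipid features with VIP scores above 1.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pareto_scale(X):
    # mean-center, then divide by the square root of each column's standard deviation
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

def vip_scores(pls):
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = np.sum(t ** 2, axis=0) * q.ravel() ** 2      # y-variance explained per component
    wnorm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(7)
X = np.log1p(rng.lognormal(size=(24, 451)))    # 24 serum samples x 451 annotated features
y = np.repeat([0.0, 1.0, 2.0], 8)              # control, blast, CCI group labels

pls = PLSRegression(n_components=2).fit(pareto_scale(X), y)
candidates = np.where(vip_scores(pls) > 1.0)[0]
print(f"{candidates.size} features with VIP > 1")
```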

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 113
87 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

Dispersed energy at radio frequencies (RF) can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by one or more Schottky diodes connected in series or in shunt. In a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also common in rectifier designs. Electronic circuit designs are commonly analyzed through simulation in the SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for analyzing quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the Finite-Difference Time-Domain (FDTD) numerical method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampère-Maxwell equation is first applied to the current density and electric field equations within the FDTD method, together with its circuit relation to the voltage drop across the modeled component in the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
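To make the lumped-element idea concrete, the sketch below embeds a resistor into a standard 1D Yee update using the semi-implicit LE-FDTD relation; it is a simplified illustration, not the rectifier circuit of the paper, and the geometry and values are arbitrary.

```python
# 1D LE-FDTD sketch: a Yee grid with one cell modified by the semi-implicit
# lumped-resistor relation, so the field update "sees" the resistor current.
# Cubic cells of size dx are assumed; grid ends act as PEC walls.
import numpy as np

c0, eps0, mu0 = 3.0e8, 8.854e-12, 4e-7 * np.pi
nz, dx = 400, 1e-3
dt = dx / (2 * c0)                      # Courant-stable time step

ez = np.zeros(nz)
hy = np.zeros(nz - 1)

k_res, R = 300, 50.0                    # resistor cell index and value (ohms)
beta = dt / (2 * R * eps0 * dx)         # lumped-element coefficient for a cubic cell

for n in range(2000):
    hy += dt / (mu0 * dx) * np.diff(ez)

    curl = dt / (eps0 * dx) * np.diff(hy)
    ez[1:-1] += curl                    # ordinary update everywhere
    # Semi-implicit correction at the resistor cell (Taflove-style LE-FDTD):
    # E_new = ((1 - beta) * E_old + curl) / (1 + beta), with E_old recovered
    # from the ordinary update applied just above.
    ez[k_res] = ((1 - beta) * (ez[k_res] - curl[k_res - 1]) + curl[k_res - 1]) / (1 + beta)

    ez[50] += np.exp(-((n - 100) / 25.0) ** 2)   # soft Gaussian source

print(f"voltage across resistor cell: {ez[k_res] * dx:.3e} V")
```

Extending the same pattern to a diode replaces the linear resistor relation with the exponential diode equation, which then requires a Newton iteration inside each field update.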

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 130