Search results for: linear matrix inequalities
480 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator
Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li
Abstract:
A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is to use a linear resonant device such as a cantilever beam, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency; the limitation of the structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, each with a neodymium (NdFeB) magnet at its free end and a fixed base at the other end. The three cantilevers are given different resonant frequencies by designing them with different thicknesses, so the structure has the same advantage of multiple resonant frequencies as a piezoelectric cantilever array. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance dependent, the resonant frequencies change in a complex way with the vertical vibration of the free ends. Both a model and an experiment are presented: an electromechanically coupled lumped-parameter model is developed, with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure is fabricated and mechanically attached to an electromagnetic shaker via the fixed base, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) material (Model: M8514P2). The cantilevers measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm, and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s², and a 130% increase in operating bandwidth is achieved. The device is promising for powering low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator
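To make the lumped-parameter model concrete, here is a minimal sketch of a single base-excited piezoelectric cantilever with a Duffing-type (cubic) stiffness standing in for the magnetic interaction. All parameter values and the cubic-stiffness form are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholders: mass (kg), damping (N·s/m), linear stiffness (N/m),
# cubic "magnetic" stiffness (N/m^3), coupling (N/V), capacitance (F), load (ohm).
m, c, k1, k3 = 5e-3, 0.05, 120.0, 1e6
theta, Cp, R = 1e-4, 50e-9, 1e5
a, w = 10.0, 2 * np.pi * 25          # base acceleration 10 m/s^2, excitation 25 Hz

def rhs(t, y):
    x, v, V = y                       # tip displacement, velocity, voltage
    dv = (-c * v - k1 * x - k3 * x**3 + theta * V + m * a * np.sin(w * t)) / m
    dV = -(V / R + theta * v) / Cp    # current balance across the load resistor
    return [v, dv, dV]

sol = solve_ivp(rhs, (0, 2.0), [0.0, 0.0, 0.0], max_step=1e-4)
V = sol.y[2][sol.t > 1.0]             # discard the start-up transient
print("mean electrical power (W):", np.mean(V**2) / R)
```

Running three copies of this model with different k1 values (the three thicknesses) and letting the magnet terms couple the tips would be the natural next step toward the tri-cantilever configuration described above.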
Procedia PDF Downloads 154
479 Navigating Top Management Team Characteristics for Ambidexterity in Small and Medium-Sized African Businesses: The Key to Unlocking Success
Authors: Rumbidzai Sipiwe Zimvumi
Abstract:
The study aimed to identify the top management team (TMT) attributes for ambidexterity in small and medium-sized enterprises (SMEs) by utilizing upper echelons theory. The conventional opinion holds that an organization's ambidexterity is reflected in its ability to pursue both exploitative and explorative innovation methods at the same time. Top-level managers are critical to this balance because they frame and explain the strategic choices that guarantee success by improving organizational performance. Since the focus of the study was on the unique characteristics of TMTs that can facilitate ambidexterity, the primary goal was to understand how TMTs in SMEs can better manage ambidexterity. The study used document analysis to collect information on ambidexterity and TMT traits: finding, choosing, assessing, and synthesizing data from peer-reviewed publications allowed for the review and evaluation of papers. The fact that SMEs will perform better if they can achieve a balance between exploration and exploitation cannot be overstated; unfortunately, exploitation is the main priority for most SMEs. The results showed that some of the noteworthy TMT traits that support ambidexterity in SMEs are age diversity, shared responsibility, leadership impact, psychological safety, and self-confidence. It has been shown that most SMEs confront significant obstacles in recruiting people, in formalizing their management, and in assembling executive teams with seniority. SMEs are often held by families or individuals who neglect to keep their personal lives apart from the firm, which eliminates the opportunity for management and staff to take the initiative. This helps to explain why exploitative strategies, which preserve present success, are used rather than explorative strategies, which open new economic opportunities and dimensions. It is evident that psychological safety deteriorates and creativity is hindered in the process. The study makes the case that TMTs can be motivated to become ambidextrous. According to the report, small and medium-sized business owners should value the opinions of all parties involved and give their managers and regular staff the freedom to think creatively in a safe environment. TMTs who experience psychological safety are more likely to be inventive, creative, and productive; team psychological safety is a team's collective perception that it is acceptable to take chances, voice opinions and concerns, ask questions, and own up to mistakes without fear of unfavorable outcomes. Thus, traits like age diversity, leadership influence, learning agility, psychological safety, and self-assurance are critical to the success of SMEs. To ensure that ambidexterity is attained, the study suggests a clear separation of ownership and control, the adoption of technology to stimulate creativity, team spirit and enthusiasm, shared accountability, and good management of diversity. Among the further suggestions for SME success are sound resource allocation and key collaborations.
Keywords: navigating, ambidexterity, top management team, small and medium enterprises
Procedia PDF Downloads 57
478 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition
Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading
Abstract:
The food security challenge posed by the growing population in Sub-Saharan Africa centers on agricultural transformation, as about 70% of the region's population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa discards $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in the food supply of the region, but their production has generally lagged behind because there has been no robust scientific input to meet the challenge. African grains are still chronically underutilized, to the detriment of the well-being of the people of Africa and elsewhere; the major reason for their underutilization is that they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that support economic growth. To clear this hurdle, co-creation activities and initiatives are necessary. One such initiative has been established between Modibbo Adama University of Technology Yola, Nigeria, and RISE (the Research Institutes of Sweden), Gothenburg, Sweden: an exchange of research expertise under the 'Traditional Grain Network programme', creating a channel for value addition to agricultural commodities in the region. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost, and new markets. The Swedish–Nigerian initiative has focused on the development of high-protein pasta. Dry microscopy of the pasta samples shows a continuous structural framework of protein and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern; the WAI values ranged between 250–300%, and in all aspects the water absorption history fell within a narrow range for all eight samples. The total cooking time for the eight samples ranged between 5–6 minutes, with dry sample diameters of 1.26–1.35 mm. The water solubility index (WSI) ranged from 6.03–6.50%, also a narrow range; the cooking loss, which is a measure of WSI, is considered one of the main parameters in the assessment of pasta quality. The protein content of the samples ranged between 17.33–18.60%. The firmness of the cooked pasta ranged from 0.28–0.86 N; the results show that increasing the ratio of cowpea flour and the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9–16.5 MPa.
Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum
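For readers unfamiliar with the indices reported above, the snippet below computes WAI and WSI from centrifugation weights following the commonly used Anderson-type definitions; the weights are made-up examples, and the exact protocol used in the study may differ.

```python
def water_absorption_index(wet_gel_g: float, dry_sample_g: float) -> float:
    """WAI (%): weight of the sediment gel per weight of dry sample, x100."""
    return 100.0 * wet_gel_g / dry_sample_g

def water_solubility_index(dissolved_solids_g: float, dry_sample_g: float) -> float:
    """WSI (%): solids dissolved into the supernatant per dry sample weight, x100."""
    return 100.0 * dissolved_solids_g / dry_sample_g

# Made-up example consistent with the ranges reported above (~276% WAI, ~6% WSI)
print(water_absorption_index(wet_gel_g=6.9, dry_sample_g=2.5))            # 276.0
print(water_solubility_index(dissolved_solids_g=0.155, dry_sample_g=2.5)) # 6.2
```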
Procedia PDF Downloads 195
477 Numerical Simulation of Hydraulic Fracture Propagation in Marine-Continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China
Authors: Jiujie Cai, Fengxia Li, Haibo Wang
Abstract:
After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology is widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of the vertical rock samples with rich bedding were determined through rock mechanics experiments. The influence of the rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and the fracturing parameters (such as injection rate, fracturing fluid viscosity, and number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate the complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference, and the fracturing parameters (injection rate, fluid volume, and viscosity). The results indicate that, at a vertical stress difference of 3 MPa and considering the effect of the weak bonding surface between layers, the fracture height can break through into the upper interlayer when the thickness of the overlying bedding layer is 6–9 m, whereas the fracture stays within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between the clusters in a stage exceeds 2 MPa, and fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. The simulated fracture heights are much greater if the effect of the weak bonding surface between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage all promote fracture height propagation through the layers, while optimizing the perforation positions and reducing the number of perforations promote uniform fracture growth. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with microseismic monitoring of the hydraulic fracturing in Well 1HF.
Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment
Procedia PDF Downloads 127
476 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor; it is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly greater use of the continuous peak height information, resulting in more efficient, reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components; instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
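A minimal sketch of the Chinese restaurant process mentioned above, the prior over cluster assignments that lets the number of mixture components grow with the data. The concentration value is arbitrary, and the study's actual model attaches a linear regression to each cluster, which is omitted here.

```python
import numpy as np

def crp_assignments(n: int, alpha: float, rng=np.random.default_rng(1)) -> list:
    """Sample table (cluster) assignments for n customers under CRP(alpha)."""
    counts = []                     # customers seated at each existing table
    labels = []
    for i in range(n):
        # P(existing table k) = n_k/(i+alpha); P(new table) = alpha/(i+alpha)
        probs = np.array(counts + [alpha], dtype=float)
        probs /= i + alpha
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)        # open a new table (new mixture component)
        else:
            counts[table] += 1
        labels.append(table)
    return labels

print(crp_assignments(20, alpha=1.0))   # e.g. [0, 0, 1, 0, 2, ...]
```

Because new tables remain possible at every step, the number of components is inferred from the data rather than fixed in advance, which is exactly what motivates the infinite mixture formulation above.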
Procedia PDF Downloads 330
475 Synthesis and Characterization of Fibrin/Polyethylene Glycol-Based Interpenetrating Polymer Networks for Dermal Tissue Engineering
Authors: O. Gsib, U. Peirera, C. Egles, S. A. Bencherif
Abstract:
In skin regenerative medicine, one of the critical issues is to produce a three-dimensional scaffold with a porosity optimized for dermal fibroblast infiltration and neovascularization, which exhibits high mechanical properties and displays sufficient wound healing characteristics. In this study, we report on the synthesis and characterization of macroporous sequential interpenetrating polymer networks (IPNs) combining the skin wound healing properties of fibrin with the excellent physical properties of polyethylene glycol (PEG). Fibrin fibers serve as a provisional biologically active network to promote cell adhesion and proliferation, while PEG provides the mechanical stability to maintain the entire 3D construct. After modifying both PEG and serum albumin (used to promote enzymatic degradability) by adding methacrylate residues (PEGDM and SAM, respectively), Fibrin/PEGDM-SAM sequential IPNs were synthesized as follows: macroporous sponges were first produced from PEGDM-SAM hydrogels by a freeze-drying technique and then rehydrated by adding the fibrin precursors. Environmental scanning electron microscopy (ESEM) and confocal laser scanning microscopy (CLSM) were used to characterize their microstructure. Human dermal fibroblasts were cultivated for one week in the constructs, and different cell culture parameters (viability, morphology, proliferation) were evaluated. Subcutaneous implantations of the scaffolds were conducted in five-week-old male nude mice to investigate their biocompatibility in vivo. We successfully synthesized interconnected, macroporous Fibrin/PEGDM-SAM sequential IPNs. The viability of the primary dermal fibroblasts was well maintained (above 90%) after 2 days of culture. Cells were able to adhere, spread, and proliferate in the scaffolds, suggesting suitable porosity and intrinsic biological properties of the constructs. The fibrin network adopted a spider-web shape that partially covered the pores, allowing easier cell infiltration into the macroporous structure. To further characterize the in vitro cell behavior, cell proliferation (EdU incorporation, MTS assay) is being studied. Preliminary histological analysis of the animal studies indicated the persistence of the hydrogels even one month post implantation and confirmed the absence of an inflammation response and the good biocompatibility and biointegration of our scaffolds within the surrounding tissues. These results suggest that our Fibrin/PEGDM-SAM IPNs could be considered potential candidates for dermis regenerative medicine. Histological analysis will be completed to further assess scaffold remodeling, including de novo extracellular matrix protein synthesis and early-stage angiogenesis, and compression measurements will be conducted to investigate the mechanical properties.
Keywords: fibrin, hydrogels for dermal reconstruction, polyethylene glycol, semi-interpenetrating polymer network
Procedia PDF Downloads 236
474 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line while it is operating. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V/50 Hz home network; the method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at different positions along the line, which is modeled using lumped parameters, and at both ends of the line the currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from the Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated from the discrete fast Fourier transform of the currents and voltages together with the fault distance value; these parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure for calculating the signature coefficients is employed, but this time for hypothetical fault distances at which the fault could appear; in this step, the fault distance is unknown. Iterating the Kirchhoff equations over stepped variations of the hypothetical fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of this work will be presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
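The final localization step, intersecting the fault-map curve from step 2 with the linear-trend curve from step 3, can be illustrated with the toy sketch below. The signature-coefficient values are synthetic stand-ins; the real coefficients come from the Kirchhoff equations and the DFT of the measured currents and voltages.

```python
import numpy as np

# Synthetic stand-ins for the two curves described in steps 2 and 3.
d_hyp = np.arange(1.0, 49.0, 1.0)             # hypothetical fault distances (m)
true_d = 23.5                                  # fault position used to make the data

sig_map = 0.8 * d_hyp + 2.0                    # step 2: fault-map trace (linear trend)
sig_est = 0.8 * true_d + 2.0 - 0.5 * (d_hyp - true_d)  # step 3: estimated-current curve

# Fit a line to each curve; their intersection is the estimated fault distance.
p_map = np.polyfit(d_hyp, sig_map, 1)
p_est = np.polyfit(d_hyp, sig_est, 1)
d_fault = (p_est[1] - p_map[1]) / (p_map[0] - p_est[0])
print(f"estimated fault distance: {d_fault:.1f} m")    # -> 23.5
```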
Procedia PDF Downloads 137
473 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations
Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer
Abstract:
In recent years, electricity trade between neighboring countries has become increasingly intense, and increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. Damping of the oscillations can be achieved by reconfiguring the network or replacing generators, but such a solution is not economically reasonable; the only cost-effective way to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system: it acts through the semiconductor excitation system connected to the rotor field winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters, whose control structure and tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they do not deliver sufficient damping improvement over the entire range of operating conditions, which is the reason advanced control theories are used to develop better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied; the presented work focuses on the model reference adaptive control approach. The control signal, which ensures that the controlled plant output follows the reference model output, is generated by the adaptive algorithm. The adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term, which is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations, and the damping of the synchronous generator oscillations was investigated over the entire operating range. The obtained results show improved damping over the entire operating area and an increase in power system stability. The results of the presented work will support the development of a model reference power system stabilizer able to replace conventional stabilizers in power systems.
Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control
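A minimal sketch of the σ-modified adaptive law described above, for a scalar plant tracking a first-order reference model. The plant, reference model, and all gain values are placeholder choices for illustration, not the stabilizer actually synthesized in the paper.

```python
import numpy as np

dt, T = 1e-3, 5.0
gamma_p, gamma_i, sigma = 4.0, 20.0, 0.1   # adaptation rates and sigma leakage
a_p, b_p = -0.5, 1.0                        # placeholder first-order plant
a_m, b_m = -3.0, 3.0                        # reference model (desired damping)

y, y_m, k_i = 0.0, 0.0, 0.0
r = 1.0                                     # step reference
for _ in range(int(T / dt)):
    e = y - y_m                             # tracking error
    k_p = gamma_p * e * r                   # "proportional" adaptive term
    k_i += dt * (gamma_i * e * r - sigma * k_i)  # sigma-term keeps k_i bounded
    u = -(k_p + k_i) * r                    # control from the combined adaptive gains
    y += dt * (a_p * y + b_p * u)           # plant step (forward Euler)
    y_m += dt * (a_m * y_m + b_m * r)       # reference model step

print(f"final tracking error: {y - y_m:.4f}")
```

The σ-leakage trades exact tracking for bounded gains: the printed error is small but nonzero, which is the expected price of the σ-term.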
Procedia PDF Downloads 138
472 Market Solvency Capital Requirement Minimization: How Non-Linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, the correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is therefore not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (sequential quadratic programming) algorithm to cope with the non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equal-risk-contribution, minimum-volatility, and minimum value-at-risk portfolios) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities; this was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
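For orientation, the standard-formula aggregation underlying the market SCR has the square-root form evaluated below for a toy portfolio. The capital charges and correlation entries are illustrative placeholders, not the regulatory values (which also depend on the interest-rate scenario).

```python
import numpy as np

# Sub-module capital charges after the prescribed stress tests (toy numbers, EUR m):
# rates, equity, property, spread, fx, concentration
scr = np.array([40.0, 120.0, 30.0, 25.0, 15.0, 10.0])
corr = np.array([                    # illustrative correlation matrix
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

# SCR_mkt = sqrt( sum_ij corr_ij * SCR_i * SCR_j ); non-smooth in the portfolio
# weights because each SCR_i is itself the positive part of a stressed loss.
scr_mkt = float(np.sqrt(scr @ corr @ scr))
print(f"market SCR: {scr_mkt:.1f}")   # well below sum(scr) = 240, i.e. diversification
```

The non-smoothness sits inside each sub-module, which is why the authors pair a bundle method with a BFGS-SQP solver rather than treating the criterion as a quadratic.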
Procedia PDF Downloads 117
471 Mesenchymal Stem Cells on Fibrin Assemblies with Growth Factors
Authors: Elena Filova, Ondrej Kaplan, Marie Markova, Helena Dragounova, Roman Matejka, Eduard Brynda, Lucie Bacakova
Abstract:
Decellularized vessels have been evaluated as small-diameter vascular prostheses. Reseeding autologous cells onto the decellularized tissue prior to implantation should prolong the function of the prostheses and make them living tissues. Suitable cell types for reseeding are endothelial cells and bone marrow-derived stem cells, the latter with a capacity for differentiation into smooth muscle cells upon mechanical loading. Endothelial cells ensure the antithrombogenicity of the vessels, while MSCs produce growth factors and, after their differentiation into smooth muscle cells, are contractile and produce extracellular matrix proteins as well. Fibrin is a natural scaffold that allows direct cell adhesion via integrin receptors, and it can be prepared autologously. Fibrin can be modified with bound growth factors, such as basic fibroblast growth factor (FGF-2) and vascular endothelial growth factor (VEGF); these modifications in turn make the scaffold more attractive for cell ingrowth into the biological scaffold. The aim of the study was to prepare thin surface-attached fibrin assemblies with bound FGF-2 and VEGF and to evaluate the growth and differentiation of bone marrow-derived mesenchymal stem cells on the fibrin (Fb) assemblies. The following thin surface-attached fibrin assemblies were prepared: Fb, Fb+VEGF, Fb+FGF2, Fb+heparin, Fb+heparin+VEGF, Fb+heparin+FGF2, and Fb+heparin+FGF2+VEGF. Cell culture polystyrene and glass coverslips were used as controls. Human MSCs (passage 3) were seeded at a density of 8800 cells in 1.5 mL of alpha-MEM medium with 2.5% FS and 200 U/mL aprotinin per well of a 24-well cell culture plate. The cells were cultured on the samples for 6 days. Cell densities on days 1, 3, and 6 were analyzed after staining with a LIVE/DEAD cytotoxicity/viability assay kit. The differentiation of the MSCs is being analyzed using qPCR. On day 1, the highest density of MSCs was observed on Fb+VEGF and Fb+FGF2; on days 3 and 6, the densities were similar on all samples. On day 1, cell morphology was polygonal and spread on all samples; on days 3 and 6, MSCs growing on Fb assemblies with FGF2 became noticeably elongated. The evaluation of the expression of genes for von Willebrand factor and CD31 (endothelial cells), alpha-actin (smooth muscle cells), and alkaline phosphatase (osteoblasts) is in progress. We prepared fibrin assemblies with bound VEGF and FGF-2 that supported the attachment and growth of mesenchymal stem cells. The layers are promising for improving the ingrowth of MSCs into biological scaffolds. Supported by the Technology Agency of the Czech Republic (TA04011345), the Ministry of Health (NT11270-4/2010), and the "BIOCEV – Biotechnology and Biomedicine Centre of the Academy of Sciences and Charles University" project (CZ.1.05/1.1.00/02.0109), funded by the European Regional Development Fund.
Keywords: fibrin assemblies, FGF-2, mesenchymal stem cells, VEGF
Procedia PDF Downloads 325
470 Application of Carbon Nanotubes as Cathodic Corrosion Protection of Steel Reinforcement
Authors: M. F. Perez, Ysmael Verde, B. Escobar, R. Barbosa, J. C. Cruz
Abstract:
Reinforced concrete is one of the most important materials in the construction industry. In recent years, however, the durability of concrete structures has been a worrying problem, mainly due to corrosion of the reinforcing steel; the consequences of corrosion in all cases shorten the life of the structure and decrease the quality of service. Since the emergence of this problem, different methods and techniques have been implemented to reduce corrosion damage to reinforcing steel in concrete structures, such as polymeric coatings for the steel rod and inhibitors added to the concrete during mixing, among others, each with its own limitations in application. For this reason, a method that has proved effective has been used: cathodic protection. Given the properties attributed to carbon nanotubes (CNTs), these could act as cathodic corrosion protection. A three-electrode electrochemical cell was assembled, with carbon steel as the working electrode, a saturated calomel electrode (SCE) as the reference electrode, and a graphite rod as the counter electrode to close the system. The samples were subjected to a cycling process in order to compare the corrosion performance of a CNT-based coating with that of a commercial anticorrosive paint. The samples were tested at room temperature in an electrolyte consisting of NaCl and NaOH, simulating the typical pH of concrete, which ranges from 12.6 to 13.9. Three test samples of steel rod were prepared: blank, with commercial anticorrosive paint, and with the CNT-based coating, with the working area delimited to a section of 0.71 cm². Cyclic voltammetry, linear sweep voltammetry, and electrochemical impedance spectroscopy measurements were made on each of the three samples, within a potential window of 0.7 to -1.7 V vs. SCE at scan rates of 50 mV/s and 100 mV/s. The impedance values were obtained by applying a sine wave of 50 mV amplitude over a frequency range of 100 kHz to 100 MHz. The results obtained in this study show that the CNT-based coating applied to the steel rod considerably decreased the corrosion rate compared to the commercial coating, as Ecorr increased over the course of the cycling process. The samples in all three cases were observed by light microscopy throughout the cycling process, and micrographic analysis was performed using scanning electron microscopy (SEM). The electrochemical measurements show that applying the coating containing carbon nanotubes to the surface of the steel rod greatly increases the corrosion resistance compared to the commercial anticorrosive coating.
Keywords: anticorrosive, carbon nanotubes, corrosion, steel
Procedia PDF Downloads 477
469 Riverine Urban Heritage: A Basis for Green Infrastructure
Authors: Ioanna H. Lioliou, Despoina D. Zavraka
Abstract:
The radical reformation that Greek urban space has undergone over the last century, due to socio-historical developments, technological development, and political-geographic factors, has left its imprint on the urban landscape. While the big cities struggle to regain urban landscape balance, small towns are considered to offer high-quality lifescapes, ensuring sustainable development potential. However, their unplanned urbanization has led to the loss of significant areas of nature, a lack of essential infrastructure, a chaotic built environment, incompatible land uses, and loss of urban cohesiveness. Natural reference points, such as springs, streams, rivers, forests, and suburban greenbelts, seem detached from urban space, while the public, open, and green spaces, unequally distributed in the built environment, are no longer able to offer a complete experience of nature in the city. This study focuses on the Greek mainland and the small town of Elassona, and aims to restore the spatial coherence between the town's homonymous river and its urban surroundings. The existing linear aquatic ecosystem is considered a precious greenway, also referred to as a blueway, able to initiate natural penetrations and ecosystem empowerment. The integration of disconnected natural ecosystems forms the basis of a strategic intervention scheme in which the river becomes the urban integration feature, constituting the main urban corridor and an indispensable part of a wider green network that connects open and green spaces, ensuring the function of all the established networks (transportation, commercial, social) of the town. The proposed intervention introduces a green network highlighting the old stone bridge at the 'entrance' of the river into the town and expanding throughout the town with strategic uses and activities, providing accessibility for all users. The methodology is based on a collection of design tools used in related urban river-design interventions around the world. The reinstallation/reactivation of the balance between the natural and urban landscape, besides its environmental benefits, contributes decisively to the projection of an urban green identity and to the re-enhancement of lifescape quality and social interaction.
Keywords: green network, rehabilitation scheme, urban landscape, urban streams
Procedia PDF Downloads 280
468 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer losses of property and life through flooding. There are three main causes of inland submergence. First, when heavy rain of high intensity occurs, the water in the inland area cannot drain into the rivers, owing to the increase in impervious surfaces from land development and to defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising sea water. Previous studies, however, ignored the complex mechanism of flooding and showed discrepancies and inadequacies because they linearly summed the separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to form the inland flooding analysis module. Applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider coastal surge, a further module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation-inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume, making it possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where compound flooding occurs and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by the grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.
Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
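In compact form, the coupling described above amounts to adding a source/sink term with an exponential growth rate to the mass-conservation part of the 2D shallow water equations. The notation below is ours, since the abstract does not give the exact form used in the module:

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y}
  = S(x, y, t),
\qquad
S(x, y, t) = S_0 \, e^{\lambda t},
```

where h is the water depth, (u, v) are the depth-averaged velocities, and S injects (S > 0) or drains (S < 0) water at the inland, river, and coastal interfaces; S₀ and λ would be calibration parameters.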
Procedia PDF Downloads 362
467 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears
Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally
Abstract:
Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack, and early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, different approaches have been tried for analyzing the vibration behavior of gears, based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that can simulate the effect of tooth cracks on the resulting vibrations and hence permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only a single crack is considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degree-of-freedom nonlinear lumped-parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including the variable gear mesh stiffness and the effect of friction, but ignores lubrication. The vibration simulation results for the gearbox were obtained via MATLAB and Simulink and were found to be consistent with results from previously published works. The effect of a single crack at different levels was studied; very similar changes in the total mesh stiffness and the vibration response were observed and compared to what has been found in previous studies. The effect of the crack length on various statistical time-domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Finally, multiple cracks were introduced at different locations, and the vibration response and the statistical parameters were obtained.
Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection
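The flavor of such a lumped-parameter simulation can be sketched in a few lines. The single-degree-of-freedom model below (a strong simplification of the paper's six-DOF model, with made-up parameters) reduces the mesh stiffness whenever one of a set of cracked teeth engages and extracts statistical features from the resulting vibration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a 1-DOF model along the line of action.
m, c = 0.5, 180.0                 # equivalent mass (kg), mesh damping (N·s/m)
k_mean = 2e8                      # mean mesh stiffness (N/m)
Z, f_rot = 25, 20.0               # pinion teeth, shaft frequency (Hz)
f_mesh = Z * f_rot                # gear mesh frequency (Hz)
cracked = {3: 0.85, 11: 0.70}     # tooth index -> stiffness retention factor

def k_mesh(t):
    """Rectangular-wave mesh stiffness, reduced while a cracked tooth engages."""
    tooth = int(t * f_mesh) % Z
    k = k_mean * (1.2 if (t * f_mesh) % 1.0 < 0.6 else 0.8)  # double/single contact
    return k * cracked.get(tooth, 1.0)

def rhs(t, y):
    x, v = y
    F = 500.0                     # static transmitted load (N)
    return [v, (F - c * v - k_mesh(t) * x) / m]

sol = solve_ivp(rhs, (0, 0.2), [0.0, 0.0], max_step=2e-5)
accel = np.gradient(sol.y[1], sol.t)        # vibration signal for feature extraction
rms = np.sqrt(np.mean(accel**2))
kurtosis = np.mean((accel - accel.mean())**4) / accel.var()**2
print(f"RMS: {rms:.3g}, kurtosis: {kurtosis:.2f}")
```

Comparing such features between the healthy dictionary (empty `cracked`) and the multi-crack case mirrors the sensitivity analysis of the time-domain parameters reported above.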
Procedia PDF Downloads 211
466 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction stage, where they have been applied less. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks involved in the matrix operations and in a two-dimensional optimization function, to obtain a force vector for each taxel in the array. This work takes advantage of the parallel hardware characteristics of field programmable gate arrays (FPGAs) and applies suitable algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% for the tangential forces and 22% for the normal forces with respect to simulation by the finite element modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation reduces the estimation time by a factor of 180 compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and on the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
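To give a feel for the matrix core that the FPGA parallelizes, here is a schematic regularized least-squares reconstruction of per-taxel force vectors from one normal-stress frame. The transfer matrix, the array size, and the Tikhonov term are our illustrative assumptions; the reported design solves an equivalent two-dimensional optimization per taxel:

```python
import numpy as np

# Hypothetical setup: recover an (fx, fy, fz) vector per taxel from the
# normal-stress image via regularized least squares.
rng = np.random.default_rng(0)
n_taxels = 10 * 10
A = rng.random((n_taxels, 3 * n_taxels))   # stress response to unit taxel forces
stress = rng.random(n_taxels)              # one captured normal-stress frame
lam = 1e-2                                 # Tikhonov term keeps the system well-posed

# Normal equations (A^T A + lam*I) f = A^T p; these matrix products are the
# operations the hardware pipelines in parallel.
f = np.linalg.solve(A.T @ A + lam * np.eye(3 * n_taxels), A.T @ stress)
forces = f.reshape(n_taxels, 3)            # (fx, fy, fz) per taxel
print(forces[:3])
```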
Procedia PDF Downloads 195
465 The Effect of Seated Distance on Muscle Activation and Joint Kinematics during Seated Strengthening in Patients with Stroke with Extensor Synergy Pattern in the Lower Limbs
Authors: Y. H. Chen, P. Y. Chiang, T. Sugiarto, I. Karsuna, Y. J. Lin, C. C. Chang, W. C. Hsu
Abstract:
Task-specific training with intense practice of functional tasks has been emphasized in approaches to motor rehabilitation in patients with hemiplegic stroke. Although reciprocal actions may increase the demands on motor control during seated stepping exercise, motor control is not explicitly trained when the emphasis and instruction are focused on traditional strengthening. Apart from cycling and treadmills, various forms of seated exercisers are becoming available for lower extremity exercise, but their reported benefits have focused on the cardiopulmonary system. Thus, the aim of the current study is to investigate the effect of seated distance on muscle activation during seated strengthening in patients with stroke with an extensor synergy pattern in the lower extremities. Electrodes were placed on the surface of the lower limb muscles, including the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF), and gastrocnemius (GT) of both sides. Maximal voluntary contractions (MVC) of the muscles were obtained to normalize the EMG amplitude obtained during the dynamic trials, with the analog raw data digitized at a sampling frequency of 2000 Hz, fully rectified, and linear enveloped. The movement cycle was separated into two phases: a pushing phase (PP) and a return phase (RP). Integrated EMG (iEMG) was then used to quantify the level of activation during each phase. Subjects performed strengthening with moderate resistance at a speed of 60 rpm at two different distances, short (D1) and long (D2). The results showed greater iEMG in RF and smaller iEMG in VL and BF, with an obviously increased range of hip flexion, in the D1 condition. On the contrary, in the D2 condition there was no significant involvement of RF during PP, while a greater level of muscular activation in VL and BF was found during RP. In addition, greater hip internal rotation was observed in the D2 condition. In patients with stroke with abnormal tone revealed by extensor synergy in the lower extremities, a shorter seated distance is suggested to facilitate hip flexor muscle activation while avoiding induction of hyperextensor tone, which may prevent a smooth repetitive motion. Repetitive muscular contraction exercise of the hip flexors may be helpful for further gait training, as it may assist hip flexion during the swing phase of walking.
Keywords: seated strengthening, patients with stroke, electromyography, synergy pattern
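The iEMG computation used to quantify activation per phase can be sketched as follows. The envelope cutoff frequency and the toy signal are assumptions, since the abstract specifies only the 2000 Hz sampling, full rectification, and linear envelope:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def iemg(raw: np.ndarray, fs: float = 2000.0, mvc_envelope: float = 1.0,
         f_cut: float = 6.0) -> float:
    """iEMG over one phase: rectify, linear-envelope, normalize to MVC, integrate.

    raw: EMG samples for the pushing or return phase; f_cut is an assumed
    low-pass cutoff for the linear envelope (the study's value is not stated).
    """
    rectified = np.abs(raw - raw.mean())          # remove offset, full-wave rectify
    b, a = butter(4, f_cut / (fs / 2), btype="low")
    envelope = filtfilt(b, a, rectified)          # linear envelope
    t = np.arange(raw.size) / fs
    return np.trapz(envelope / mvc_envelope, t)   # %MVC·s when mvc is in the same units

# Toy signal: 1 s of noise-like EMG sampled at 2 kHz
rng = np.random.default_rng(0)
print(iemg(rng.standard_normal(2000)))
```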
Procedia PDF Downloads 214
464 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC sources; according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is quite limited in the literature. There are many reasons for this, starting from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the interpretation of the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes investigation of the inherent optical properties possible as well. Most methodologies for the spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling; in the former, accuracy limits the applicability of the approach, while in the latter, sensitivity does. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
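The wavelength dependencies (AAE, SAE) mentioned above are conventionally quantified as power-law exponents. The sketch below computes the absorption Ångström exponent from multi-wavelength absorption data; the wavelength set is the Nd:YAG harmonic set typical of 4λ-PAS instruments, and the coefficient values are made-up numbers:

```python
import numpy as np

# Absorption coefficients at the four PAS wavelengths (made-up values, Mm^-1);
# the channel set 1064/532/355/266 nm is an assumption based on typical 4λ-PAS designs.
wl = np.array([266.0, 355.0, 532.0, 1064.0])     # nm
b_abs = np.array([58.0, 38.0, 21.0, 8.5])

# Power-law model b_abs ~ lambda^(-AAE): AAE is the negative log-log slope.
aae = -np.polyfit(np.log(wl), np.log(b_abs), 1)[0]
print(f"AAE = {aae:.2f}")

# Two-wavelength form, often quoted pairwise:
aae_pair = -np.log(b_abs[1] / b_abs[2]) / np.log(wl[1] / wl[2])
print(f"AAE(355/532) = {aae_pair:.2f}")
```

The scattering Ångström exponent (SAE) follows the same recipe with the nephelometer's scattering coefficients in place of b_abs.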
Procedia PDF Downloads 361
463 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming and subjective and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between the input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
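A minimal sketch of the Random Forest pipeline on synthetic activity-level records; the column names, the synthetic target, and the hyperparameters are illustrative assumptions, not the study's dataset:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical activity-level records with a synthetic overrun target.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 5e5, n),
    "planned_duration_days": rng.integers(5, 120, n),
    "scope_changes": rng.integers(0, 6, n),
    "material_delay_days": rng.integers(0, 30, n),
})
df["cost_overrun"] = (0.02 * df.planned_cost * df.scope_changes / 5
                      + 800 * df.material_delay_days
                      + rng.normal(0, 5e3, n))

X, y = df.drop(columns="cost_overrun"), df["cost_overrun"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
# Feature importances surface the cost drivers (scope changes, material delays)
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```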
Procedia PDF Downloads 54
462 i-Plastic: Surface and Water Column Microplastics from the Coastal North Eastern Atlantic (Portugal)
Authors: Beatriz Rebocho, Elisabete Valente, Carla Palma, Andreia Guilherme, Filipa Bessa, Paula Sobral
Abstract:
The global accumulation of plastic in the oceans is a growing problem. Plastic is transported from its sources to the oceans via rivers, which are considered the main route for plastic particles from land-based sources to the ocean. These plastics undergo physical and chemical degradation, resulting in microplastics. The i-Plastic project aims to understand and predict the dispersion, accumulation, and impacts of microplastics (5 mm to 1 µm) and nanoplastics (below 1 µm) in marine environments, from the tropical and temperate land-ocean interface to the open ocean, under distinct flow and climate regimes. Seasonal monitoring of the fluxes of microplastics was carried out in three coastal areas in Brazil, Portugal, and Spain. The present work shows the first results of in-situ seasonal monitoring and mapping of microplastics in ocean waters between Ovar and Vieira de Leiria (Portugal), in which 43 surface water samples and 43 water column samples were collected in contrasting seasons (spring and autumn). The spring and autumn surface water samples were collected with 300 µm and 150 µm pore neuston nets, respectively; in both campaigns, water column samples were collected using a conical mesh with a 150 µm pore. The experimental procedure comprises the following steps: i) sieving through a metal sieve; ii) digestion with potassium hydroxide to remove the organic matter originating from the sample matrix. After a filtration step, the content retained on a membrane is observed under a stereomicroscope, and a physical and chemical characterization (type, color, size, and polymer composition) of the microparticles is performed. The results showed that 84% of the surface water samples and 88% of the water column samples were contaminated with microplastics. Surface water samples collected during the spring campaign averaged 0.35 MP/m³, while those collected during autumn recorded 0.39 MP/m³. Water column samples from the spring campaign averaged 1.46 MP/m³, while those from the autumn recorded 2.54 MP/m³. In the spring, all microplastics found were fibers, predominantly black and blue; in autumn, fibers dominated the surface waters, while fragments dominated the water column. In spring, the average size of the surface water particles was 888 μm and that of the water column particles 1063 μm; in autumn, the average sizes of the surface and water column microplastics were 1333 μm and 1393 μm, respectively. The main polymers identified by attenuated total reflectance (ATR) and micro-ATR Fourier transform infrared (FTIR) spectroscopy in all samples were low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The significant difference in the microplastic concentration in the water column between the two campaigns could be due to the remixing of the water masses that occurred that week because of a storm. This work presents preliminary results, since the i-Plastic project is still in progress. These results will contribute to the understanding of the spatial and temporal dispersion and accumulation of microplastics in this marine environment.
Keywords: microplastics, Portugal, Atlantic Ocean, water column, surface water
Procedia PDF Downloads 80
461 Influence of High-Resolution Satellites Attitude Parameters on Image Quality
Authors: Walid Wahballah, Taher Bazan, Fawzy Eltohamy
Abstract:
One of the important functions of a satellite attitude control system is to provide the pointing accuracy and attitude stability that optical remote sensing satellites require to achieve good image quality. Although they offer noise reduction and increased sensitivity, the time delay and integration (TDI) charge-coupled devices (CCDs) utilized in high-resolution satellites (HRS) are prone to introduce large amounts of pixel smear due to instability of the line of sight. During on-orbit imaging, as a result of the Earth's rotation and the instability of the satellite platform, the moving direction of the TDI-CCD linear array and the imaging direction of the camera become different. The speed of the image moving on the image plane (focal plane) represents the image motion velocity, and the angle between the two directions is known as the drift angle (β). The drift angle arises from the rotation of the Earth around its axis during satellite imaging; it affects the geometric accuracy and, consequently, causes image quality degradation. Therefore, the image motion velocity vector and the drift angle are two important factors used in assessing the image quality of TDI-CCD-based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters represented in the derived model are the roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ̇, pitch angular velocity θ̇, and yaw angular velocity ψ̇. The influence of these attitude parameters on image quality is analyzed by establishing a relationship between the image motion velocity vector, the drift angle, and the six attitude parameters, and is assessed in terms of the modulation transfer function (MTF) in both the cross- and along-track directions. Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered using four different sets of typical pointing accuracy values, while the attitude stability parameters are kept ideal. In the same manner, the influence of attitude stability (φ̇, θ̇, ψ̇) on image quality is analyzed for ideal pointing accuracy parameters. The results reveal that cross-track image quality is seriously influenced by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias.
Keywords: high-resolution satellites, pointing accuracy, attitude stability, TDI-CCD, smear, MTF
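A common first-order way to express the smear penalty discussed above (our notation; the paper's full model also carries the attitude-rate terms) is through a sinc-shaped MTF: a residual drift angle β acting over N TDI stages displaces the image cross-track by roughly d ≈ N p tan β, giving

```latex
\mathrm{MTF}_{\text{smear}}(f) \;=\;
\left| \frac{\sin(\pi\, d\, f)}{\pi\, d\, f} \right|,
\qquad
d \;\approx\; N\, p \tan\beta ,
```

where p is the pixel pitch and f the spatial frequency; the same form, with d driven by the image-motion-velocity mismatch, applies in the along-track direction.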
Procedia PDF Downloads 402
460 The Impact of the Variation of Sky View Factor on Landscape Degree of Enclosure of Urban Blue and Green Belt
Authors: Yi-Chun Huang, Kuan-Yun Chen, Chuang-Hung Lin
Abstract:
The urban green and blue belt is part of the city landscape and an important constituent of the urban environment and its appearance. The Hsinchu East Gate Moat, situated in the center of the city, not only holds a wealth of historical and cultural resources but also combines green belt and blue belt qualities. The moat runs more than a thousand meters through the vital green and blue belts of the downtown area, and from south to north each section presents the moat in a different character. The water area and the surrounding green belt spread out in a linear, banded pattern, and the water body and the rich, diverse river banks form an urban green belt of layered depth. The watercourse and green belt design let users connect with the blue belt in different ways; therefore, the integration of the East Gate and the moat has become one of the unique urban landscapes in Taiwan. The study is based on a fact-finding case of the Hsinchu East Gate Moat, situated in northern Taiwan, researching the influence of variations in the city's sky view factor (SVF) on the spatial sequence of the urban green and blue belt landscape, with visual analysis by constituent cross-sections, and then comparing how different leaf area indices, as variable ecological factors, affect the degree of enclosure. We surveyed the landscape design of the open space and measured the existing structural features of the plant canopy, including plant and branch heights, crown diameter and breast-height diameter, using Geographic Information System (GIS) diagrams and on-the-spot measurements. The blue and green belt areas of the north and south districts were divided into 20-meter units from the East Gate Roundabout as the epicenter, and survey points were set up to measure the SVF above each point; we then performed quantitative analysis on the data to calculate the degree of enclosure of the open landscape. The results can serve as a reference for the composition of future river landscapes and for the practical dynamic space planning of blue and green belt landscapes.
Keywords: sky view factor, degree of enclosure, spatial sequence, leaf area indices
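As a rough illustration of the SVF measurement described above, the sketch below uses a common approximation for horizon obstructions sampled around the azimuth, SVF ≈ 1 - mean(sin²βᵢ), and treats the degree of enclosure as 1 - SVF; both the formula choice and the obstruction angles are assumptions, not the study's actual procedure.

```python
# Hedged sketch: sky view factor from horizon obstruction angles.
import math

def sky_view_factor(obstruction_angles_deg):
    """SVF in [0, 1]; 1 = fully open sky, 0 = fully enclosed."""
    terms = [math.sin(math.radians(b)) ** 2 for b in obstruction_angles_deg]
    return 1.0 - sum(terms) / len(terms)

# 36 azimuth sectors (every 10 degrees) with hypothetical canopy/building
# elevation angles at one survey point along the moat.
angles = [25, 30, 40, 45, 50, 48, 42, 35, 28, 20, 15, 10,
          12, 18, 24, 30, 36, 40, 44, 46, 45, 40, 33, 27,
          22, 18, 16, 15, 17, 20, 24, 28, 31, 30, 28, 26]
svf = sky_view_factor(angles)
print(f"SVF = {svf:.3f}, degree of enclosure = {1 - svf:.3f}")
```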
Procedia PDF Downloads 556
459 Nanoporous Activated Carbons for Fuel Cells and Supercapacitors
Authors: A. Volperts, G. Dobele, A. Zhurinsh, I. Kruusenberg, A. Plavniece, J. Locs
Abstract:
Energy consumption is constantly increasing, so the development of effective and cheap electrochemical power sources, such as fuel cells and electrochemical capacitors, is topical. Owing to their high specific power, fast charge and discharge rates, and long working lifetime, supercapacitor-based energy storage systems are being used ever more extensively in mobile and stationary devices. Lignocellulosic materials are widely used as precursors and account for around 45% of the total raw material used to manufacture activated carbon, which is the most suitable material for supercapacitors. The first part of our research is devoted to the influence of the main stages of wood thermochemical activation on the formation of the porous structure of activated carbons. It was found that the main factors governing the properties of carbon materials are specific surface area, pore volume and pore size distribution, particle dispersity, ash content and the content of oxygen-containing groups. The influence of activated carbon attributes on the capacitance and working properties of supercapacitors is demonstrated, and a correlation between the porous structure indices of activated carbons and the electrochemical specifications of supercapacitors with electrodes made from these materials has been determined. It is shown that when the synthesized activated carbons are used in supercapacitors, high specific capacitances can be reached: more than 380 F/g in 4.9 M sulfuric acid electrolyte and more than 170 F/g in 1 M tetraethylammonium tetrafluoroborate in acetonitrile. The power specifications and minimum price of H₂-O₂ fuel cells are limited by expensive platinum-based catalysts. The main direction in the development of non-platinum catalysts for oxygen reduction is the study of cheap porous carbonaceous materials, which can be obtained by the pyrolysis of polymers, including renewable biomass. It is known that nitrogen atoms in carbon materials largely determine the properties of doped activated carbons, such as high electrochemical stability, hardness and electric resistance. The lack of sufficient knowledge on the doping of carbon materials calls for ongoing research into the properties and structure of the modified carbon matrix. In the second part of this study, highly porous activated carbons were synthesized by alkali thermochemical activation from wood, cellulose and cellulose production residues: kraft lignin and sewage sludge. Activated carbon samples were doped with dicyandiamide and melamine for application as fuel cell cathodes. The conditions of nitrogen introduction (solvent, treatment temperature) and its content in the carbonaceous material, as well as porous structure characteristics such as specific surface area and pore size distribution, were studied. It was found that the efficiency of the doping reaction depends on the elemental oxygen content of the activated carbon. Relationships between nitrogen content, porous structure characteristics and the electrochemical properties of the electrodes are demonstrated.
Keywords: activated carbons, low-temperature fuel cells, nitrogen doping, porous structure, supercapacitors
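Specific capacitances like those quoted above are typically derived from galvanostatic discharge curves via C = I·Δt/(m·ΔV). A minimal sketch, with illustrative input values rather than the study's measurements:

```python
# Hedged sketch: gravimetric specific capacitance of a supercapacitor
# electrode from a galvanostatic discharge, C = I * dt / (m * dV).
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance in F/g for a single electrode."""
    return (current_a * discharge_time_s) / (mass_g * voltage_window_v)

c = specific_capacitance(current_a=0.010, discharge_time_s=190.0,
                         mass_g=0.005, voltage_window_v=1.0)
print(f"specific capacitance = {c:.0f} F/g")  # 380 F/g for these inputs
```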
Procedia PDF Downloads 120
458 Enhancement Effect of Superparamagnetic Iron Oxide Nanoparticle-Based MRI Contrast Agent at Different Concentrations and Magnetic Field Strengths
Authors: Bimali Sanjeevani Weerakoon, Toshiaki Osuga, Takehisa Konishi
Abstract:
Magnetic Resonance Imaging Contrast Agents (MRI-CM) are significant in clinical and biological imaging, as they have the ability to alter normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles, coated with dextran or carboxydextran, are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular-weight carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors like magnetic field strength, concentration, nanoparticle properties, pH and temperature. Therefore, this study was conducted to investigate the impact of field strength and contrast concentration on the enhancement effect of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma for a T2-weighted spin echo sequence at three magnetic field strengths, 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), over a range of contrast concentrations. The relaxivities r1 and r2 (L mmol⁻¹ s⁻¹) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using TR/TE = 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength the r1 relaxivity tends to decrease, while r2 showed no systematic relationship with the selected field strengths. In parallel, the results of this study revealed that the signal intensity of Resovist tends to be higher at lower concentrations than at higher ones. The highest signal intensity was observed at the low field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T and 3 T were found at concentrations of 0.05, 0.06 and 0.05 mmol/L, respectively. Furthermore, at concentrations higher than these, the signal intensity decreased exponentially. An inverse relationship was found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s⁻¹) with the concentration of Resovist was observed. From these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the field strength of MRI are two important parameters that affect the signal intensity of a T2-weighted SE sequence. Therefore, both parameters should be considered prudently when MR imaging.
Keywords: concentration, Resovist, field strength, relaxivity, signal intensity
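A minimal sketch of the kind of simulation described here, using the standard spin-echo signal model S ∝ (1 - exp(-TR·R1))·exp(-TE·R2) with R1 = 1/T1₀ + r1·C and R2 = 1/T2₀ + r2·C. The TR/TE and the 1.5 T relaxivities follow the abstract, while the baseline plasma T1₀ and T2₀ are assumed values:

```python
# Hedged sketch: T2-weighted SE signal vs. SPIO concentration (1.5 T case).
import numpy as np

TR, TE = 2.0, 0.1          # s (TR/TE = 2000 ms / 100 ms, as in the abstract)
r1, r2 = 7.4, 95.0         # L mmol^-1 s^-1 at 1.5 T (from the abstract)
T1_0, T2_0 = 1.4, 0.3      # s, ASSUMED baseline plasma relaxation times

def se_signal(c_mmol_per_l):
    R1 = 1.0 / T1_0 + r1 * c_mmol_per_l
    R2 = 1.0 / T2_0 + r2 * c_mmol_per_l
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

concentrations = np.array([0.05, 0.06, 0.07, 0.1, 0.5, 1.0, 3.0])
signals = se_signal(concentrations)
best = concentrations[np.argmax(signals)]
print(f"peak SE signal at ~{best} mmol/L")  # low concentrations maximize signal
```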
Procedia PDF Downloads 352
457 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds
Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel
Abstract:
Carbon nanotubes (CNTs) combine lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. To date, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type coupling with aryl halides toward the formation of C-N and C-O bonds. The results indicated that the stability of the catalyst was much improved and that the elaborated catalytic system was efficient and recyclable. However, CNTs have not been considered as the substrate itself in Ullmann-type reactions. If successful, this functionalization would open new areas of CNT chemistry leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperatures as high as 200 °C, often in the presence of stoichiometric amounts of copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g. diamines, amino acids, diols, 1,10-phenanthroline) can be applied to increase the solubility and stability of the copper catalyst and, at the same time, allow the reaction to proceed under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of a copper salt and the appropriate chelator. Our research is focused on applying the Ullmann-type reaction to the covalent functionalization of CNTs. Firstly, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves the formation of several chemical species (ICl, Cl2 and I2Cl6), of which the dimer is the most reactive. Since the dimer is the main species in CCl4, high reactivity and possibly high CNT functionalization levels can be expected; indeed, this method introduced a notable amount of chlorine onto the MWCNT surface. The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene and phenol, for the formation of C-N, C-C and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. As the CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker MWCNTs synthesized in-house using catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters of 60-70 nm and lengths up to 300 µm. Since the classical Ullmann reaction was found to suffer from poor yields, we investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide and N,N-dimethylformamide) on the coupling of the substrates. Given that aryl halides show the reactivity order I>Br>Cl>F, we also investigated the effect of the presence of iodine on the CNT surface on the reaction yield; in this case, iodine monochloride was used in the first step instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene to control CNT dispersion.
Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction
Procedia PDF Downloads 168
456 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
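The abstract does not publish its dataset or code, but a minimal scikit-learn sketch of the Random Forest part of such a pipeline might look as follows; the feature names (scope changes, material delay days, planned budget) and the synthetic target are illustrative assumptions:

```python
# Hedged sketch: Random Forest regression for cost-overrun prediction on a
# synthetic stand-in dataset; none of these values come from the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 10, n),        # scope_changes (count)
    rng.integers(0, 60, n),        # material_delay_days
    rng.uniform(0.5, 5.0, n),      # planned_budget_musd
])
# Synthetic target: overrun (%) driven mainly by scope changes and delays
y = 2.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
print("feature importances:", model.feature_importances_.round(2))
```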
Procedia PDF Downloads 38
455 Polymer Composites Containing Gold Nanoparticles for Biomedical Use
Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: Nanomaterials have become some of the leading materials in the synthesis of various compounds, because nano-sized materials exhibit properties different from those of their macroscopic equivalents. Such a change in size is reflected in changed optical, electric or mechanical properties. Among nanomaterials, particular attention is currently directed toward gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial from the viewpoint of the wound healing process. The specific properties of this type of nanomaterial mean that it may also be applied in cancer treatment. The development of new drug delivery techniques is currently an important research subject for many scientists: along with the development of fields such as medicine and pharmacy, the need for better and more effective methods of administering drugs is constantly growing. The solution may be the use of drug carriers, materials that combine with the active substance and lead it directly to the desired place. Gold nanoparticles can play the role of such a carrier, since they are able to bond covalently with many organic substances. This allows the combination of nanoparticles with active substances; therefore, gold nanoparticles are widely used in the preparation of nanocomposites for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, the synthesis of composites was carried out. The composites consisted of a polymer matrix with gold nanoparticles introduced into the polymer network. The synthesis was conducted with the use of a crosslinking agent and a photoinitiator, and the materials were obtained by means of a photopolymerization process. Next, incubation studies were conducted using selected liquids simulating fluids occurring in the human body, which allowed the biocompatibility of the tested composites to be determined in relation to the selected environments. The chemical structure of the composites was then characterized, as well as their sorption properties. Conclusions: The conducted research allowed for a preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their application for biomedical use. The tested materials were characterized by biocompatibility in the tested environments. What is more, the synthesized composites exhibited relatively high swelling capacity, which is essential in view of their potential application as drug carriers: during such an application, the composite swells and at the same time releases the active substance introduced into its interior, so it is important to check the swelling ability of the material. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO-2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology).
Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties
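The swelling capacity mentioned in the conclusions is commonly quantified as a gravimetric swelling ratio, Q = (W_swollen - W_dry)/W_dry. A minimal sketch follows; the fluid names and weights are hypothetical, not the study's data:

```python
# Hedged sketch: gravimetric swelling ratio for hydrogel-type composites.
def swelling_ratio(w_dry_g, w_swollen_g):
    """Mass of absorbed fluid per unit mass of dry composite (g/g)."""
    return (w_swollen_g - w_dry_g) / w_dry_g

incubation = {  # hypothetical weights after incubation in simulated body fluids
    "Ringer solution": (0.50, 3.10),
    "simulated body fluid": (0.50, 2.85),
    "distilled water": (0.50, 3.60),
}
for fluid, (dry, swollen) in incubation.items():
    print(f"{fluid}: Q = {swelling_ratio(dry, swollen):.2f} g/g")
```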
Procedia PDF Downloads 116
454 The Effects of in vitro Digestion on Cheese Bioactivity; Comparing Adult and Elderly Simulated in vitro Gastrointestinal Digestion Models
Authors: A. M. Plante, F. O’Halloran, A. L. McCarthy
Abstract:
By 2050, it is projected that 2 billion of the global population will be more than 60 years old. Older adults have unique dietary requirements, and aging is associated with physiological changes that affect appetite, sensory perception, metabolism, and digestion. Therefore, it is essential that foods recommended and designed for older adults promote healthy aging. To assess cheese as a functional food for the elderly, a range of commercial cheese products were selected and compared for their antioxidant properties. Cheeses from various milk sources (bovine, goat, sheep) with different textures and fat contents, including cheddar, feta, goats cheese, brie, roquefort, halloumi, wensleydale and gouda, were initially digested with two different simulated in vitro gastrointestinal digestion (SGID) models. One SGID model represented a validated in vitro adult digestion system; the second, an elderly SGID, was designed to account for the physiological changes associated with aging. The antioxidant potential of all cheese digestates was investigated using in vitro chemical-based antioxidant assays: 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, ferric reducing antioxidant power (FRAP) and total phenolic content (TPC). All adult model digestates had high antioxidant activity across both DPPH (> 70%) and FRAP (> 700 µM Fe²⁺/kg.fw) assays. Following in vitro digestion using the elderly SGID model, full-fat red cheddar, low-fat white cheddar, roquefort, halloumi, wensleydale, and gouda digestates had significantly lower (p ≤ 0.05) DPPH radical scavenging properties compared to the adult model digestates. Full-fat white cheddar had higher DPPH radical scavenging activity following elderly SGID digestion compared to the adult model digestate, but the difference was not significant. All other cheese digestates from the elderly model were comparable to the digestates from the adult model in terms of radical scavenging activity. The FRAP of all elderly digestates was significantly lower (p ≤ 0.05) compared to the adult digestates. Goats cheese was significantly higher (p ≤ 0.05) in FRAP (718 µM Fe²⁺/kg.fw) compared to all other digestates in the elderly model. TPC levels in the soft cheeses (feta, goats) and low-fat cheeses (red cheddar, white cheddar) were significantly lower (p ≤ 0.05) in the elderly digestates compared to the adult digestates. There was no significant difference in TPC levels between the elderly and adult models for full-fat cheddar (red, white), roquefort, wensleydale, gouda, and brie digestates. Halloumi was the only cheese significantly higher in TPC levels following elderly digestion compared to the adult digestates. Low-fat red cheddar had significantly higher (p ≤ 0.05) TPC levels compared to all other digestates for both the adult and elderly digestive systems. Findings from this study demonstrate that aging has an impact on the bioactivity of cheese, as antioxidant activity and TPC levels were lower following in vitro elderly digestion compared to the adult model. For older adults, soft cheese, particularly goats cheese, was associated with high radical scavenging and reducing power, while roquefort cheese had low antioxidant activity. Also, elderly digestates of halloumi and low-fat red cheddar were associated with high TPC levels. Cheese has potential as a functional food for the elderly; however, bioactivity can vary depending on the cheese matrix.
Funding for this research was provided by the RISAM Scholarship Scheme, Cork Institute of Technology, Ireland.
Keywords: antioxidants, cheese, in vitro digestion, older adults
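The DPPH assay used above is commonly evaluated as percent scavenging, (A_control - A_sample)/A_control × 100, from absorbance readings. A minimal sketch with hypothetical absorbance values (not the study's measurements):

```python
# Hedged sketch: DPPH radical scavenging from absorbance readings.
def dpph_scavenging(a_control, a_sample):
    """Percent DPPH radical scavenging (absorbance typically read at 517 nm)."""
    return (a_control - a_sample) / a_control * 100.0

digestates = {  # hypothetical absorbance values
    "goats cheese (adult model)": (0.920, 0.240),
    "goats cheese (elderly model)": (0.920, 0.310),
    "roquefort (elderly model)": (0.920, 0.610),
}
for name, (ctrl, sample) in digestates.items():
    print(f"{name}: {dpph_scavenging(ctrl, sample):.1f}% scavenging")
```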
Procedia PDF Downloads 228
453 Walking Cadence to Attain a Minimum of Moderate Aerobic Intensity in People at Risk of Cardiovascular Diseases
Authors: Fagner O. Serrano, Danielle R. Bouchard, Todd A. Duhame
Abstract:
Walking cadence (steps/min) is an effective way to prescribe exercise so that an individual can reach a moderate intensity, which is recommended to optimize health benefits. To our knowledge, there is no study on the walking cadence required to reach a moderate intensity for people who present chronic conditions, or risk factors for chronic conditions such as cardiovascular disease (CVD). The objectives of this study were: 1) to identify the walking cadence needed for people at risk of CVD to reach a moderate intensity, and 2) to develop and test an equation using clinical variables to help professionals working with individuals at risk of CVD estimate the walking cadence needed to reach a moderate intensity. Ninety-one people presenting a minimum of two risk factors for CVD completed a medically supervised graded exercise test to assess maximal oxygen consumption at the first visit. The last visit consisted of recording walking cadence using a Garmin FR-60 foot pod and a Polar heart rate monitor, aiming to get participants to reach 40% of their maximal oxygen consumption, measured with a portable metabolic cart, on an indoor flat surface. The equation to predict the walking cadence needed to reach a moderate intensity in this sample was developed as follows: the sample was randomly split in half, and the equation was developed with one half of the participants and validated using the other half. Body mass index, height, stride length, leg height, body weight, fitness level (VO2max), and self-selected cadence (over 200 meters) were measured objectively. The mean walking cadence to reach a moderate intensity for people at risk of CVD, aged 64.3 ± 10.3 years, was 115.8 ± 10.3 steps per minute. Body mass index, height, body weight, fitness level, and self-selected cadence were associated with walking cadence at moderate intensity in bivariate analyses (r ranging from 0.22 to 0.52; all P values ≤ 0.05). In a linear regression analysis including all clinical variables associated in the bivariate analyses, body weight was the significant predictor of the walking cadence needed to reach a moderate intensity (β = 0.24; P = .018), explaining 13% of the variance in walking cadence. The regression model created was Y = 134.4 - 0.24 × body weight (kg). Our findings suggest that people presenting two or more risk factors for CVD reach a moderate intensity while walking at a cadence above the one officially recommended for healthy adults (116 steps per minute vs. 100 steps per minute).
Keywords: cardiovascular disease, moderate intensity, older adults, walking cadence
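A minimal sketch applying the regression model reported in the abstract; the example body weights are arbitrary:

```python
# Hedged sketch: estimated walking cadence (steps/min) to reach moderate
# intensity, using the abstract's model Y = 134.4 - 0.24 * body weight (kg).
def moderate_intensity_cadence(body_weight_kg):
    return 134.4 - 0.24 * body_weight_kg

for weight in (60, 75, 90):  # illustrative body weights in kg
    print(f"{weight} kg -> {moderate_intensity_cadence(weight):.0f} steps/min")
```

For a 75 kg person this gives about 116 steps/min, consistent with the mean cadence reported above.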
Procedia PDF Downloads 443
452 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is an energy-consumption management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
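As an illustration of the DTW step, the sketch below implements the classic dynamic programming recurrence and compares a hypothetical appliance signature with a detected event window; the power traces are made up, and a real pipeline would operate on the extracted features described above:

```python
# Hedged sketch: minimal DTW distance between two low-rate power traces.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

fridge_sig = np.array([0, 80, 85, 85, 80, 0], dtype=float)   # known signature
window = np.array([0, 0, 78, 84, 86, 81, 0], dtype=float)    # detected event
print(f"DTW distance: {dtw_distance(fridge_sig, window):.1f}")
```

A small DTW distance between a detected event window and a stored signature suggests the event belongs to that appliance, even when the traces are misaligned in time.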
Procedia PDF Downloads 78
451 Diet and Exercise Intervention and Bio–Atherogenic Markers for Obesity Classes of Black South Africans with Type 2 Diabetes Mellitus Using Discriminant Analysis
Authors: Oladele V. Adeniyi, B. Longo-Mbenza, Daniel T. Goon
Abstract:
Background: Lipid levels are often low or within normal ranges among black Africans, and their role in atherogenesis is controversial. The effect of the severity of obesity on some traditional and novel cardiovascular disease (CVD) risk factors, before and after a diet and exercise maintenance programme, is unclear among obese black South Africans with type 2 diabetes mellitus (T2DM). Therefore, this study aimed to identify the risk factors that discriminate obesity classes among patients with T2DM before and after a diet and exercise programme. Methods: This interventional cohort of black South Africans with T2DM followed a very-low-calorie diet and exercise programme in Mthatha between August and November 2013. Gender, age, and the levels of body mass index (BMI), blood pressure, monthly income, daily frequency of meals, random plasma glucose (RPG), serum creatinine, total cholesterol (TC), triglycerides (TG), LDL-C, HDL-C, non-HDL, and the ratios TC/HDL, TG/HDL, and LDL/HDL were recorded. Univariate analysis (ANOVA) and multivariate discriminant analysis were performed to separate the obesity classes: normal weight (BMI = 18.5-24.9 kg/m2), overweight (BMI = 25-29.9 kg/m2), obesity class 1 (BMI = 30-34.9 kg/m2), obesity class 2 (BMI = 35-39.9 kg/m2), and obesity class 3 (BMI ≥ 40 kg/m2). Results: At baseline (first month, September), all 327 patients were overweight or obese: 19.6% overweight, 42.8% obese class 1, 22.3% obese class 2, and 15.3% obese class 3. In the discriminant analysis, only systolic blood pressure (SBP, positive association) and the LDL/HDL ratio (negative association) significantly separated increasing obesity classes. At the post-intervention evaluation (third month, November), of all 327 patients, 19.9%, 19.3%, 37.6%, 15%, and 8.3% had normal weight, overweight, obesity class 1, obesity class 2, and obesity class 3, respectively. There was a significant negative association between serum creatinine and increasing BMI. In the discriminant analysis, only age (positive association), SBP (U-shaped relationship), monthly income (inverted U-shaped association), daily frequency of meals (positive association), and the LDL/HDL ratio (positive association) significantly classified increasing obesity classes. Conclusion: There is an epidemic of diabesity (obesity + T2DM) in these black South Africans, despite some weight loss. Further studies are needed to understand the positive or negative linear correlations, and the paradoxical curvilinear correlations, between these markers and increasing BMI among black South African T2DM patients.
Keywords: atherogenic dyslipidaemia, dietary interventions, obesity, South Africans
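As an illustration of the discriminant analysis used here, the following scikit-learn sketch fits a linear discriminant model to synthetic stand-in data whose class trends loosely mimic the reported baseline associations (SBP rising and LDL/HDL falling across obesity classes); none of the numbers are the study's measurements:

```python
# Hedged sketch: linear discriminant analysis separating obesity classes from
# two clinical markers, on synthetic stand-in data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_per_class = 60
classes = ["normal", "overweight", "obese1", "obese2", "obese3"]
X, y = [], []
for k, label in enumerate(classes):
    sbp = rng.normal(125 + 5 * k, 10, n_per_class)          # rises with class
    ldl_hdl = rng.normal(3.0 - 0.2 * k, 0.5, n_per_class)   # falls with class
    X.append(np.column_stack([sbp, ldl_hdl]))
    y += [label] * n_per_class
X = np.vstack(X)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", round(lda.score(X, y), 2))
print("class means (SBP, LDL/HDL):")
for label, mean in zip(lda.classes_, lda.means_):
    print(f"  {label}: {mean.round(1)}")
```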
Procedia PDF Downloads 367