Search results for: smart glasses
92 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads
Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon
Abstract:
The Philippines, composed of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures challenge structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries. Vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety and economy of the structure. Its natural frequency must not be so low as to cause discomfort, nor so high that the structure becomes so stiff that it is both costly and heavy. It is well known that the stiffer a member is, the more load it attracts. The frequency must also not match the vibration caused by the traffic loads; if this happens, resonance occurs. Vibration that matches a system's natural frequency will generate excitation, and when this exceeds the member's limit, structural failure will follow. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways, then verified and adjusted based on the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions.
The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. Once the dynamic characteristics of the system are determined, threshold criteria can be established and fragility curves can be estimated. This study, conducted under the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and its structural plans are readily available.
Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads
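As an aside, the dynamic fragility defined above (the probability that the system exceeds a limit state under dynamic loads) can be sketched numerically. The lognormal form below is a common choice in the fragility-curve literature and is an assumption here; the abstract does not specify the study's exact model:

```python
from math import erf, log, sqrt

def fragility(response, threshold, beta):
    """Probability that a monitored dynamic response exceeds the
    limit-state threshold, under an assumed lognormal model with
    dispersion beta: P = Phi(ln(response/threshold)/beta), where
    Phi is the standard normal CDF (written via the error function)."""
    z = log(response / threshold) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

At the threshold itself the exceedance probability is 0.5, and it grows monotonically as the monitored response increases relative to the threshold.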
Procedia PDF Downloads 270
91 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine, user preferences, and critical diagnostic data for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. The operating temperature range of these sensors is between -70°C and 850°C, while the temperature range required in home appliance applications is between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1-micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were performed at room temperature. Annealing at 600°C was carried out in a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. Temperature dependence between 25-300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
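For context, the routine calculations behind these measurements can be sketched as follows; the platinum temperature coefficient used (0.00385 per °C) is the standard RTD value and an assumption here, not a figure from the abstract:

```python
import math

def sheet_resistance(voltage, current):
    """Four-point-probe sheet resistance (ohm/square) of a thin film,
    using the standard infinite-sheet correction factor pi/ln(2)."""
    return (math.pi / math.log(2)) * voltage / current

def resistivity(r_sheet, thickness_m):
    """Film resistivity (ohm*m) from sheet resistance and thickness."""
    return r_sheet * thickness_m

def rtd_temperature(r, r0, alpha=0.00385):
    """Temperature (degC) from the linear RTD model R = R0*(1 + alpha*T),
    with alpha the standard platinum temperature coefficient."""
    return (r / r0 - 1.0) / alpha
```

For example, a platinum resistor of 100 Ω at 0°C that reads 138.5 Ω corresponds to 100°C under this linear model.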
Procedia PDF Downloads 73
90 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are spent molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two-stage counter-current mode at A/O = 1:1 with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. XRD peaks of MoO3 correspond to the molybdite Syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesized MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.
Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
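The staging logic can be illustrated with a small sketch. This is a simplified repeated-contact estimate with a constant distribution ratio, not the graphical McCabe-Thiele construction used in the study; the distribution-ratio value in the example is chosen only to match the reported ~85% single-contact extraction:

```python
def percent_extraction(D, ratio_oa=1.0):
    """Single-contact percent extraction from the distribution ratio D
    and the organic-to-aqueous phase ratio O/A."""
    return 100.0 * D * ratio_oa / (1.0 + D * ratio_oa)

def stages_needed(D, ratio_oa, target_fraction):
    """Number of repeated contacts needed to reach a target extracted
    fraction, assuming D stays constant between stages."""
    extracted, raffinate, n = 0.0, 1.0, 0
    step = percent_extraction(D, ratio_oa) / 100.0
    while extracted < target_fraction:
        n += 1
        extracted += raffinate * step  # metal moved in this contact
        raffinate *= 1.0 - step        # metal left in the aqueous phase
    return n
```

With D ≈ 5.67 at O/A = 1:1 (about 85% per contact), two stages already push the extraction above 95%, consistent with the two-stage result reported above.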
Procedia PDF Downloads 268
89 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning
Authors: Shayan Mohajer Hamidi
Abstract:
Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning
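To make the knowledge-transfer step concrete, the following sketch shows the classic temperature-scaled distillation loss (Hinton-style KL divergence between softened teacher and student outputs). It is a generic illustration; the FKD variants discussed above (FT, FitNet, CC, SP, RKD) add feature- or relation-based terms not shown here:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The loss is zero when the student matches the teacher exactly and positive otherwise, which is what drives the compressed client models toward the teacher's behavior.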
Procedia PDF Downloads 75
88 Lessons Learned from a Chronic Care Behavior Change Program: Outcome to Make Physical Activity a Habit
Authors: Doaa Alhaboby
Abstract:
Behavior change is a complex process that often requires ongoing support and guidance. Telecoaching programs have emerged as effective tools for facilitating behavior change by providing personalized support remotely. This abstract explores the lessons learned from a randomized controlled trial (RCT) evaluation of a telecoaching program focused on behavior change for people with diabetes and discusses strategies for implementing these lessons to overcome the challenge of making physical activity a habit. The telecoaching program involved participants engaging in regular coaching sessions delivered via phone calls. These sessions aimed to address various aspects of behavior change, including goal setting, self-monitoring, problem-solving, and social support. Over the course of the program, participants received personalized guidance tailored to their unique needs and preferences. One of the key lessons learned from the RCT was the importance of engagement, readiness to change, and the use of technology. Participants who set specific, measurable, attainable, relevant, and time-bound (SMART) goals were more likely to make sustained progress toward behavior change. Additionally, regular self-monitoring of behavior and progress was found to be instrumental in promoting accountability and motivation. Moving forward, implementing the lessons learned from the RCT can help individuals overcome the hardest part of behavior change: making physical activity a habit. One strategy is to prioritize consistency and establish a regular routine for physical activity. This may involve scheduling workouts at the same time each day or week and treating them as non-negotiable appointments. Additionally, integrating physical activity into daily life routines, while accounting for the main challenges that can interrupt that integration, can help make it more habitual.
Furthermore, leveraging technology and digital tools can enhance adherence to physical activity goals. Mobile apps, wearable activity trackers, and online fitness communities can provide ongoing support, motivation, and accountability. These tools can also facilitate self-monitoring of behavior and progress, allowing individuals to track their activity levels and adjust their goals as needed. In conclusion, telecoaching programs offer valuable insights into behavior change and provide strategies for overcoming challenges, such as making physical activity a habit. By applying the lessons learned from these programs and incorporating them into daily life, individuals can cultivate sustainable habits that support their long-term health and well-being.
Keywords: lifestyle, behavior change, physical activity, chronic conditions
Procedia PDF Downloads 59
87 Flexible Design Solutions for Complex Free Form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), pressing problems of resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. The paper presents the design process stages and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
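As a minimal illustration of the rewriting procedures mentioned above, a Lindenmayer system expands an axiom by applying production rules to every symbol in parallel; the rules below are the textbook algae example, not those of the column object described in the paper:

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system string: every symbol with a production rule
    is rewritten in parallel at each iteration; symbols without a
    rule pass through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

With rules A -> AB and B -> A, the axiom "A" grows as A, AB, ABA, ABAAB, ...; in a shape grammar the symbols would instead map to geometric operations applied to the input surface.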
Procedia PDF Downloads 377
86 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production
Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani
Abstract:
The seedling production stage is a critical point in the vegetable production system. Obtaining high-quality seedlings is a prerequisite for subsequent cropping to go well, and productivity optimization is required. Water management is an important step in agricultural production. Meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a duly adjusted, quality irrigation system, together with a specific water management plan to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves the use of inputs such as water-retaining polymers (hydrogels) and automation technologies for the data acquisition and irrigation system. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa, MG. Tomato seedlings (Lycopersicon esculentum Mill) were produced in plastic trays of 128 cells, suspended at 1.25 m from the ground. The seedlings were irrigated by 4 micro sprinklers of fixed jet 360º per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the cultivation coefficient (Kc) of seedlings grown in the presence and absence of hydrogel was evaluated by weighing lysimeter. In Phase 2, September 2017 (duration of 25 days), the seedlings were submitted to 4 irrigation managements (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated in relation to quality parameters.
The microclimate inside the greenhouse was monitored with air temperature, relative humidity and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the Penman-Monteith standard method FAO56, modified for the balance of long waves according to Walker, Aldrich, Short (1983), and conducted the water balance and irrigation decision-making for each experimental treatment. The Kc of seedlings grown on a substrate with hydrogel (1.55) was higher than the Kc on a pure substrate (1.39). The use of the hydrogel was a differential for the production of earlier tomato seedlings, with greater final height, larger collar diameter, greater accumulation of shoot dry mass, larger crown projection area and greater relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
Keywords: automatic system, water use efficiency, precision irrigation, micro sprinkler
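The hourly irrigation decision described above can be sketched as follows; the relation ETc = Kc x ETo is the FAO-56 single-crop-coefficient form consistent with the abstract, while the deficit trigger value in the sketch is an illustrative assumption:

```python
def crop_et(eto_mm, kc):
    """Crop evapotranspiration ETc = Kc * ETo (FAO-56 single
    crop coefficient form), in mm."""
    return kc * eto_mm

def irrigation_decision(hourly_eto_mm, kc, trigger_mm=2.0):
    """Accumulate hourly reference ET into a crop water deficit and
    irrigate once the deficit reaches the trigger depth (the trigger
    is an assumed value, not taken from the study)."""
    deficit = crop_et(sum(hourly_eto_mm), kc)
    return deficit >= trigger_mm, deficit
```

For instance, with the hydrogel Kc of 1.55 and a daily ETo of 4 mm, the crop water requirement is 6.2 mm.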
Procedia PDF Downloads 116
85 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists, based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptions regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
84 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running
Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp
Abstract:
Introduction: Running is one of the most popular sports activities. Improper sagittal plane ankle, knee and hip kinematics are considered to be associated with increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics in both field and laboratory settings, as they are cheaper, more portable, more accessible, and easier to use than 3D motion analysis systems. The aims of this study are 1) to compare the results of a 3D gait analysis system and CE, and 2) to evaluate the test-retest and intra-rater reliability of the Coach's Eye (CE) app for the sagittal plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to the subject's body. Subjects were asked to run at a self-selected speed on a treadmill. Twenty-five seconds of running were collected for analyzing the kinematics of interest. To measure sagittal plane hip, knee and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal plane hip, knee and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest and intra-rater reliability. To analyze the agreement between 3D and 2D outcomes, Bland-Altman plots were used.
The ICC values were: ankle at TD (TRR = 0.80, IRR = 0.94), ankle at TO (TRR = 0.90, IRR = 0.97), knee at TD (TRR = 0.78, IRR = 0.98), knee at TO (TRR = 0.90, IRR = 0.96), hip at TD (TRR = 0.75, IRR = 0.97), hip at TO (TRR = 0.87, IRR = 0.98). The Bland-Altman plots, displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) between 3D and 2D outcomes, gave: ankle at TD (MD = 3.71, +2SDMD = 8.19, -2SDMD = -0.77), ankle at TO (MD = -1.27, +2SDMD = 6.22, -2SDMD = -8.76), knee at TD (MD = 1.48, +2SDMD = 8.21, -2SDMD = -5.25), knee at TO (MD = -6.63, +2SDMD = 3.94, -2SDMD = -17.19), hip at TD (MD = 1.51, +2SDMD = 9.05, -2SDMD = -6.03), hip at TO (MD = -0.18, +2SDMD = 12.22, -2SDMD = -12.59). Discussion: The ability to reproduce measurements accurately is valuable in the performance and clinical assessment of joint angle outcomes. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all measured kinematics is excellent (ICC ≥ 0.75). The Bland-Altman plots display large differences in values for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement. Since ankle motion at TD mostly occurs outside the sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect TO during the stance phase of running.
Keywords: reliability, running, sagittal plane, two dimensional
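The agreement statistics reported above can be reproduced with a short sketch; it computes the mean difference and the ±2 SD limits used in the Bland-Altman plots (the sample data in the usage example are made up, not the study's measurements):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference (MD) and MD +/- 2*SD limits of agreement
    between paired measurements from two methods (e.g., the 3D gait
    analysis system vs. the 2D app)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    md = mean(diffs)
    sd = stdev(diffs)
    return md, md + 2.0 * sd, md - 2.0 * sd
```

A systematic offset between the methods shows up as a non-zero MD, while random disagreement widens the ±2SD band.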
Procedia PDF Downloads 201
83 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen
Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev
Abstract:
The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. We used several steps of data analysis to classify and identify microbes. All single particles were analysed by the parameters of light scattering and fluorescence in the following steps. (1) They were treated with a smart filter block to get rid of non-microbes. (2) By a classification algorithm, we verified that the filtered particles were microbes, based on the calibration data. (3) A user-defined probability threshold step provides the probability of a particle being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). By using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification suggests a precision of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively.
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ classified and quantified the different types of microbes in real time. Rapid-E+ can identify pollen down to species level with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms
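The three analysis steps listed above (filtering, classification, probability threshold) can be sketched as a small pipeline; the particle dictionaries and the toy classifier are hypothetical stand-ins for the instrument's machine-learning models:

```python
def classify_particles(particles, classifier, threshold=0.9):
    """Identification sketch: (1) drop particles flagged as
    non-microbes by the filter block, (2) score the remainder with a
    trained classifier returning a microbe probability, (3) keep only
    particles whose probability clears the user-defined threshold."""
    filtered = [p for p in particles if not p.get("non_microbe", False)]
    scored = [(p, classifier(p)) for p in filtered]
    return [p for p, prob in scored if prob >= threshold]
```

Raising the threshold trades recall for precision, which mirrors the user-defined probability-threshold step described above.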
Procedia PDF Downloads 90
82 Additive Manufacturing with Ceramic Filler
Authors: Irsa Wolfram, Boruch Lorenz
Abstract:
Innovative solutions in additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3D printing. Consequently, scientific studies in labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers, with polylactide (PLA), a thermoplastic polyester, serving as primal matter; it is processed in the melting zone of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament. This permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter.
It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further functional particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialized composite filaments, providing value for industries and societies. This low-threshold entry to sophisticated preparation of composite filaments - enabling businesses to create their own dedicated filaments - will support the mutual efforts to extend 3D printing to new functional devices.
Keywords: additive manufacturing, ceramic composites, complex filament, industrial application
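As a hedged illustration of how a defined filler concentration translates between weight and volume terms, the standard composite conversion can be sketched as follows. The abstract gives no numbers; the densities below are nominal literature values for PLA and for alumina, which is chosen here purely as an example ceramic filler:

```python
def filler_volume_fraction(w_f: float, rho_f: float, rho_m: float) -> float:
    """Convert a filler weight fraction to a volume fraction.

    w_f   : filler weight fraction (0..1)
    rho_f : filler density (g/cm^3)
    rho_m : matrix density (g/cm^3)
    """
    w_m = 1.0 - w_f
    return (w_f / rho_f) / (w_f / rho_f + w_m / rho_m)

# Assumed example: 20 wt% alumina (~3.95 g/cm^3) in PLA (~1.24 g/cm^3).
v_f = filler_volume_fraction(0.20, 3.95, 1.24)
# A dense filler occupies a smaller volume share than its weight share.
print(f"volume fraction: {v_f:.3f}")
```

Because the extruder preserves the defined concentration, a relation like this is what links the weighed-in primal matter to the particle distribution seen in microscopy.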
Procedia PDF Downloads 106
81 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform
Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan
Abstract:
While several strategies, such as the development of more efficient appliances, the financing of insulation programs, or the rollout of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people's decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor into the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm “Économie d’Énergie SAS”, proposes a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors to help save energy and water and reduce waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work builds on this database to identify the potential impact of the adopted energy-saving behaviors (n=62) on reducing energy use in buildings. To this end, behaviors were classified into three categories according to the nature of their implementation (Eco-habits, e.g., turning off the lights; Eco-actions, e.g., installing low-carbon technology such as LED light bulbs; and Home-Refurbishments, e.g., wall insulation or double-glazed energy-efficient windows). General Linear Models (GLM) disclosed a significantly higher frequency of Eco-habits when compared to the number of Home-Refurbishments carried out by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it also contrasts with the up to three times larger energy savings that can be accomplished by these means.
Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more Eco-habits and Eco-actions were more likely to engage in Home-Refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions.
Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency
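The multiple-regression step described above can be sketched as follows. This is a minimal illustration on synthetic data: the variable names, sample size, and the ordinary-least-squares fit via numpy are assumptions for demonstration, not the authors' actual pipeline; the savings variable is deliberately generated independently of the behavior counts to mimic the reported null result:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic per-user counts of adopted behaviors in the three categories.
eco_habits = rng.poisson(8.0, n)
eco_actions = rng.poisson(3.0, n)
refurbishments = rng.poisson(0.5, n)

# Synthetic energy savings, independent of the counts by construction.
savings = rng.normal(100.0, 20.0, n)

# Multiple regression: savings ~ intercept + the three behavior counts.
X = np.column_stack([np.ones(n), eco_habits, eco_actions, refurbishments])
beta, *_ = np.linalg.lstsq(X, savings, rcond=None)

pred = X @ beta
ss_res = np.sum((savings - pred) ** 2)
ss_tot = np.sum((savings - savings.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # stays near zero when no real relationship exists
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.3f}")
```

A near-zero R² in such a fit is what "failed to disclose the expected relationship" amounts to operationally.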
Procedia PDF Downloads 200
80 Assessment of the Properties of Microcapsules with Different Polymeric Shells Containing a Reactive Agent for their Suitability in Thermoplastic Self-healing Materials
Authors: Małgorzata Golonka, Jadwiga Laska
Abstract:
Self-healing polymers are one of the most investigated groups of smart materials. As materials engineering has recently focused on the design, production, and study of modern materials and future technologies, researchers are looking for innovations in structural, construction, and coating materials. Based on the available scientific literature, it can be concluded that most research focuses on the self-healing of cement, concrete, asphalt, and anticorrosion resin coatings. In our study, a method of obtaining and testing the properties of several types of microcapsules for use in self-healing polymer materials was developed. The method yields microcapsules exhibiting various mechanical properties, especially compressive strength. This effect was achieved by using various polymer materials to build the shell: urea-formaldehyde resin (UFR), melamine-formaldehyde resin (MFR), and melamine-urea-formaldehyde resin (MUFR). Dicyclopentadiene (DCPD) was used as the core material because it can polymerize by ring-opening olefin metathesis (ROMP) in the presence of a solid Grubbs catalyst, which shows relatively high chemical and thermal stability. The ROMP of dicyclopentadiene leads to a polymer with high impact strength, high thermal resistance, good adhesion to other materials, and good chemical and environmental resistance, making it a very promising candidate for self-healing materials. The capsules were obtained by condensation polymerization of formaldehyde with urea, melamine, or both in situ in water dispersion, with different molar ratios of formaldehyde, urea, and melamine. The fineness of the organic phase dispersed in water, and consequently the size of the microcapsules, was regulated by the stirring speed. In all cases, the aim was to establish synthesis conditions that yield capsules with appropriate mechanical strength.
The microcapsules were characterized by determining their diameters and diameter distribution and by measuring the shell thickness using digital optical microscopy and scanning electron microscopy, as well as by confirming the presence of the active substance in the core by FTIR and SEM. Compression tests were performed to determine the mechanical strength of the microcapsules. The highest repeatability of microcapsule properties was obtained for the UFR shell, while the MFR shell had the best mechanical properties. The encapsulation efficiency of MFR was, however, much lower than that of UFR. Therefore, capsules with a MUFR shell may be the optimal solution. The chemical reaction between the active substance present in the capsule core and the catalyst placed outside the capsules was confirmed by FTIR spectroscopy. The obtained autonomous repair systems (microcapsules + catalyst) were introduced into polyethylene in the extrusion process and tested for self-repair of the material.
Keywords: autonomic self-healing system, dicyclopentadiene, melamine-urea-formaldehyde resin, microcapsules, thermoplastic materials
Procedia PDF Downloads 45
79 Advancing Food System Resilience by Pseudocereals Utilization
Authors: Yevheniia Varyvoda, Douglas Taren
Abstract:
At the aggregate level, climate variability, the rising number of active violent conflicts, the globalization and industrialization of agriculture, the loss of diversity in crop species, the increase in demand for agricultural production, and the adoption of healthy and sustainable dietary patterns are exacerbating factors of food system destabilization. The importance of pseudocereals in fueling and sustaining resilient food systems is recognized by leading organizations working to end hunger, particularly for their critical capability to diversify livelihood portfolios and provide plant-sourced healthy nutrition in the face of systemic shocks and stresses. Amaranth, buckwheat, and quinoa are the most promising and most widely used pseudocereals for ensuring food system resilience under climate change due to their high nutritional profile, good digestibility, palatability, medicinal value, abiotic stress tolerance, pest and disease resistance, rapid growth rate, adaptability to marginal and degraded lands, high genetic variability, low input requirements, and income generation capacity. The study provides the rationale and examples for advancing the resilience of local and regional food systems by scaling up the utilization of amaranth, buckwheat, and quinoa along all components of food systems to architect indirect nutrition interventions and climate-smart approaches. Thus, this study aims to explore the drivers of ancient pseudocereal utilization, the potential resilience benefits that can be derived from using them, and the challenges and opportunities for pseudocereal utilization within the food system components. The PSALSAR framework for conducting systematic review and meta-analysis in environmental science research was used to answer these research questions.
Nevertheless, the utilization of pseudocereals has been slow for a number of reasons, namely the increased production of commercial and major staples such as maize, rice, wheat, soybean, and potato; displacement due to pressure from imported crops; lack of knowledge about value-adding practices in the food supply chain; limited technical knowledge and awareness about nutritional and health benefits; the absence of marketing channels; and limited access to extension services and information about resilient crops. The success of climate-resilient pathways based on pseudocereal utilization underlines the importance of co-designed activities that use modern technologies, high-value traditional knowledge of underutilized crops, and a strong acknowledgment of cultural norms to increase community-level economic and food system resilience.
Keywords: resilience, pseudocereals, food system, climate change
Procedia PDF Downloads 79
78 Stable Diffusion: A Context-to-Motion Model for Augmenting the Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
The approach is designed to facilitate the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, the user's nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition all contribute to this process, though it can also be applied to other tasks, such as walking. Prosthetic limbs thus serve as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, making them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, so the approach offers users a way to maximize their quality of life, social interaction, and gesture communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
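The collect-train-control loop described above can be sketched as follows. This is a minimal illustration only: ridge regression stands in for the learned model, and the plant dimensions and data are invented, so it is a sketch of the workflow rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) Collect a dataset of desired movements: sensory observations -> control inputs.
#    An assumed linear plant (2 actuators, 3 sensors) generates the "recorded" examples.
true_map = np.array([[0.5, -0.2, 0.1],
                     [0.0,  0.8, -0.3]])
obs = rng.normal(size=(500, 3))       # sensory observations
ctrl = obs @ true_map.T               # matching control inputs

# 2) Train a model to predict control inputs from observations (ridge regression,
#    closed form; the small lambda regularizes against noisy feedback).
lam = 1e-6
W = np.linalg.solve(obs.T @ obs + lam * np.eye(3), obs.T @ ctrl)  # 3x2 weights

# 3) Real-time-style control: map a new sensory reading to actuator commands.
new_obs = np.array([0.2, -0.1, 0.4])
command = new_obs @ W
print(command.round(3))
```

In a real system the regression would be replaced by the trained diffusion/context model, but the dataset-train-predict structure is the same.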
Procedia PDF Downloads 101
77 Blockchain for the Monitoring and Reporting of Carbon Emission Trading: A Case Study on Its Possible Implementation in the Danish Energy Industry
Authors: Nkechi V. Osuji
Abstract:
The use of blockchain to address the issue of climate change is increasingly a discourse among countries, industries, and stakeholders. For a long time, the European Union (EU) has been driving climate action in industries through sustainability programs. One such program is the EU monitoring, reporting, and verification (MRV) program of the EU ETS. However, the system has some key challenges and areas for improvement, which make it inefficient. The main objective of the research is to examine how blockchain can be used to address the inefficiencies of the EU ETS program for the Danish energy industry, with a focus on its monitoring and reporting framework. Applying empirical data from 13 semi-structured expert interviews, three case studies, and literature reviews, three outcomes are presented in the study. The first concerns the current conditions and challenges of monitoring and reporting CO₂ emission trading. The second considers whether blockchain is the right fit to solve these challenges, and how. The third looks at the factors that might affect the implementation of such a system and provides recommendations to mitigate these challenges. The first stage of the findings reveals that the monitoring and reporting of CO₂ emissions is a mandatory requirement by law for all energy operators under the EU ETS program. However, in reality most energy operators are non-compliant with the program, which creates a gap and causes challenges in the monitoring and reporting of CO₂ emission trading. Other challenges the study identified are the lack of transparency, the lack of standardization in CO₂ accounting, and the issue of double counting in the current system. The second stage of the research was guided by three case studies and requirements engineering (RE) to explore these identified challenges and whether blockchain is the right fit to address them.
This stage of the research addressed the main research question: how can blockchain be used for monitoring and reporting CO₂ emission trading in the energy industry? Through analysis of the study data, the researcher developed a conceptual private permissioned Hyperledger blockchain and explained how it can address the identified challenges. In particular, the smart contract feature of blockchain was highlighted, because of its ability to automate, remain immutable, and digitally enforce agreements without a middleman. These characteristics are unique in solving the identified issues of compliance, transparency, standardization, and double counting. The third stage of the research presents technological constraints and the need for a high level of stakeholder collaboration as major factors that might affect the implementation of the proposed system. The proposed conceptual model requires high-level integration with other technologies, such as the Internet of Things (IoT) and machine learning; the study therefore encourages future research in these areas, as blockchain is continually evolving its technological capabilities. As such, it remains a topic of interest in research and development for addressing climate change, and this study contributes to creating sustainable practices that address the global climate issue.
Keywords: blockchain, carbon emission trading, European Union emission trading system, monitoring and reporting
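A hedged sketch of the smart-contract logic the abstract describes, automated validation and rejection of double-counted reports on an append-only, hash-chained ledger, might look like the following Python model. It is a conceptual stand-in only, not actual Hyperledger Fabric chaincode (which is typically written in Go or Node.js), and the rules and field names are invented for illustration:

```python
import hashlib
import json


class EmissionLedger:
    """Toy append-only ledger whose 'contract' rejects duplicate reports."""

    def __init__(self):
        self.blocks = []              # hash-chained emission reports
        self.seen_report_ids = set()

    def report_emission(self, report_id: str, operator: str, tonnes_co2: float) -> bool:
        # Contract rule 1: each report id may be counted exactly once.
        if report_id in self.seen_report_ids:
            return False              # double counting rejected automatically
        # Contract rule 2: reported quantities must be non-negative.
        if tonnes_co2 < 0:
            return False
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"id": report_id, "operator": operator,
                   "tonnes_co2": tonnes_co2, "prev": prev_hash}
        # Chain each block to its predecessor; tampering breaks the chain.
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append(payload)   # immutable by convention: append-only
        self.seen_report_ids.add(report_id)
        return True


ledger = EmissionLedger()
print(ledger.report_emission("R-001", "PlantA", 120.5))  # accepted
print(ledger.report_emission("R-001", "PlantA", 120.5))  # duplicate rejected
```

The duplicate rejection and the hash chain correspond to the double-counting and transparency issues the study identifies; a permissioned deployment would add access control on top.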
Procedia PDF Downloads 128
76 Isolation, Selection and Identification of Bacteria for Bioaugmentation of Paper Mill White Water
Authors: Nada Verdel, Tomaz Rijavec, Albin Pintar, Ales Lapanje
Abstract:
Objectives: White water circuits of woodfree paper mills contain suspended, dissolved, and colloidal particles, such as cellulose, starch, paper sizings, and dyes. When the white water circuits are closed, these particles start to accumulate and affect production. Due to the high amount of organic matter, which scavenges radicals and adsorbs onto catalyst surfaces, treatment of white water by photocatalysis is inappropriate; the most suitable approach should be bioaugmentation-assisted bioremediation. Accordingly, the objectives were: - to isolate bacteria capable of degrading the organic compounds used in the papermaking process - to select the most active bacteria for bioaugmentation. Status: The state of the art in bioaugmentation of pulp and paper mill effluents is mostly based on the biodegradation of lignin, whereas in the white water circuits of woodfree paper mills only papermaking compounds are present. As far as can be told from the literature, a study of the degradation activities of bacteria toward all compounds of the papermaking process is a novelty. Methodology: The main parameters of the selected white water were systematically analyzed over a period of two months. Bacteria were isolated on selective media with a particular carbon source. The organic substances used as carbon sources enter the white water circuits either as base paper or as recycled broke. The screening of bacterial activities toward starch, cellulose, latex, polyvinyl alcohol, alkyl ketene dimers, and resin acids was followed by the addition of Lugol's solution. Degraders of polycyclic aromatic dyes were selected by cometabolism tests; cometabolism is the simultaneous biodegradation of two compounds, in which the degradation of the second compound depends on the presence of the first. The obtained strains were identified by 16S rRNA sequencing. Findings: 335 autochthonous strains were isolated on plates with a selected carbon source. The isolated strains were selected according to their degradation of the particular carbon source.
The ultimate degraders of cationic starch, cellulose, and sizings are Pseudomonas sp. NV-CE12-CF and Aeromonas sp. NV-RES19-BTP. The most active strains capable of degrading azo dyes are Aeromonas sp. NV-RES19-BTP and Sphingomonas sp. NV-B14-CF. Klebsiella sp. NV-Y14A-BTP degrades the polycyclic aromatic dye direct blue 15 as well as a yellow dye; Agromyces sp. NV-RED15A-BF and Cellulosimicrobium sp. NV-A4-BF are specialists for whitener, and Aeromonas sp. NV-RES19-BTP is a general degrader of all compounds. Bacteria adapted to the white water were isolated and selected according to their degradation activities toward particular organic substances. Most of the isolated bacteria are specialized, which lowers competition in the microbial community: degraders of readily biodegradable compounds do not degrade recalcitrant polycyclic aromatic dyes, and vice versa. General degraders are rare.
Keywords: bioaugmentation, biodegradation of azo dyes, cometabolism, smart wastewater treatment technologies
Procedia PDF Downloads 203
75 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
The development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for establishing solar power in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were carried out in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder.
Additionally, 3D models are made in Revit; they are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
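The charge-at-surplus, discharge-at-peak behavior of each connection point can be sketched as a simple hourly energy balance. Only the 500 kW PV and 1 kWh BES ratings come from the abstract; the load and solar profiles and peak hours below are invented, and converter efficiency and power limits are ignored, so this is a conceptual stand-in for the EnergyPlus simulation, not a reproduction of it:

```python
# Hourly dispatch sketch for one connection point: 500 kW PV, 1 kWh battery.
PV_KW = 500.0
BATT_KWH = 1.0

def dispatch(load_kwh, solar_kwh, peak_hours):
    """Charge the battery from PV surplus; discharge it during peak hours."""
    soc = 0.0                      # battery state of charge, kWh
    grid_import = []
    for hour, (load, solar) in enumerate(zip(load_kwh, solar_kwh)):
        solar = min(solar, PV_KW)  # production cannot exceed PV rating
        net = load - solar         # positive: deficit, negative: surplus
        if net < 0:                # surplus: charge battery, clipped at capacity
            soc = min(BATT_KWH, soc - net)
            net = 0.0
        elif hour in peak_hours:   # peak deficit: discharge battery first
            discharge = min(soc, net)
            soc -= discharge
            net -= discharge
        grid_import.append(net)
    return grid_import

# Invented 6-hour profile (kWh): midday surplus, evening peak at hours 4-5.
load = [300, 320, 350, 380, 450, 420]
solar = [200, 400, 420, 300, 50, 0]
print(dispatch(load, solar, peak_hours={4, 5}))
```

With a 1 kWh battery the peak-period relief is tiny relative to a 500 kW feeder load, which is exactly the kind of sizing trade-off the full simulation is meant to expose.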
Procedia PDF Downloads 76
74 Foreseeing the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance for adhering correctly to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 64
73 Digital Twins in the Built Environment: A Systematic Literature Review
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting, and operations management, DT do not fall under a singular scope. This makes defining and leveraging the potential uses of DT a potentially missed opportunity. Despite their recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to operational-stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of DT; such consolidation is indispensable to ensure that future research is undertaken on a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis was conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and their content was translated into a thorough classification of DT.
In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonisation goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision-making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics, and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancement of DT implementation within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.
Keywords: built environment, design, digital twins, literature review
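The core DT idea the review describes, cyber-physical integration of data between an asset and its virtual replica, can be reduced to a minimal sketch. The class, asset name, and threshold alert below are illustrative inventions rather than any standard DT API; they show only the sync-state-and-support-decisions pattern:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal virtual replica mirroring timestamped sensor readings of one asset."""
    asset_id: str
    state: dict = field(default_factory=dict)     # latest mirrored readings
    history: list = field(default_factory=list)   # full record for simulation/forecasting

    def ingest(self, timestamp: float, readings: dict) -> list:
        """Sync physical-asset readings into the twin; return any alerts."""
        self.state.update(readings)
        self.history.append((timestamp, dict(readings)))
        # Illustrative decision support: flag high temperature in the replica.
        return [f"{self.asset_id}: {k} high ({v})"
                for k, v in readings.items()
                if k == "temp_c" and v > 28.0]

twin = DigitalTwin("asset-01")
print(twin.ingest(0.0, {"temp_c": 21.5, "strain": 0.002}))   # no alerts
print(twin.ingest(60.0, {"temp_c": 29.1}))                   # one alert
```

Real built-environment twins add IoT ingestion pipelines, simulation engines, and forecasting on top of this mirrored state, which is where the six opportunities above come in.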
Procedia PDF Downloads 81
72 Machine Learning and the Internet of Things for Smart Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). This range of studies in a given basin is varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of the variables can only be mastered by collecting data and transmitting them to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project for the sub-basin of the Shullcas River in the Andean basin of the Mantaro River in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas River (Huancayo) and the Yauli River (Jauja), at altitudes close to 5000 m a.s.l., leading to the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances should remain between 5 and 10 km; in this way the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the network devices (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM microcontrollers (e.g., the Cortex-M series), together with high-efficiency DC-DC converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning and the data entered into the big data store converge second by second, providing each application of the complex system with the best estimates of the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
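The distance guidance above can be sanity-checked with a free-space path-loss estimate. The sketch below assumes a LoRa-style link in the 868 MHz band with an invented 140 dB link budget; neither figure is given in the abstract, so this is an illustrative model, not the deployed system's parameters:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

LINK_BUDGET_DB = 140.0   # assumed: Tx power + antenna gains - Rx sensitivity
FREQ_MHZ = 868.0         # assumed sub-GHz IoT band

for d in (1, 3, 5, 10, 15):
    loss = fspl_db(d, FREQ_MHZ)
    margin = LINK_BUDGET_DB - loss
    print(f"{d:>2} km: FSPL {loss:5.1f} dB, margin {margin:5.1f} dB")
```

Free-space loss alone leaves margin even at 15 km, but terrain and vegetation losses in a mountain basin far exceed the free-space figure, which is consistent with shortening links to 1-3 km when there is no good visibility between devices.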
Procedia PDF Downloads 160
71 Innovation and Entrepreneurship in the South of China
Authors: Federica Marangio
Abstract:
This study looks at the knowledge triangle of research-education-innovation as the growth engine of an inclusive and sustainable society, where research is the strategic process that allows the acquisition of knowledge, innovation appraises the knowledge acquired, and education is the factor enabling human capital to create entrepreneurial capital. Where do Italy and China stand in the global geography of innovation? Europe is calling for smart, inclusive and sustainable growth through a specialization process that addresses social and economic challenges and understands the characteristics of specific geographic areas. It is worth asking why it is not as simple as it looks to come up with entrepreneurial ideas in all geographic areas. Given that technology plus human capital should be the means through which it is possible to innovate and contribute to the growth of an innovation culture, young educated people can be seen as society's agents of change, and the importance of investigating the skills and competencies that lead to innovation becomes clear. By starting innovation-based activities, other countries at the international level are now able to be part of a healthy innovative ecosystem, the result of a strong growth policy that enables innovation. Analyzing the geography of innovation on a global scale, it comes to light that innovative entrepreneurship is the process that portrays the competitiveness of regions in the knowledge-based economy, a strategic process able to match intellectual capital with market opportunities. The level of innovative entrepreneurship is not only the result of the endogenous growth ability of the enterprises, but also of significant relations with other enterprises, universities, other centers of education, and institutions.
To obtain more innovative entrepreneurship, it is necessary to stimulate more synergy between all these territorial actors in order to create, access, and value existing and new knowledge ready to be disseminated. This study focuses on individuals' lived experience; the researcher believed that she could not understand human actions without understanding the meaning that actors attribute to their thoughts, feelings, and beliefs, and so she needed to understand the deeper perspectives captured through face-to-face interaction. A case study approach will contribute to the betterment of knowledge in this field. This case study will paint a picture of the innovative ecosystem and of the entrepreneurial mindset as a key ingredient of endogenous growth and a must for sustainable local and regional development and social cohesion. The case study will be realized by analyzing two Chinese companies. A structured set of questions will be asked in order to gain details on what generated success or failure in different situations, both in the past and at the time of the research. Everything will be recorded so as not to lose important information during the transcription phase. While this work is not geared toward testing a priori hypotheses, it is nevertheless useful to examine whether the projects undertaken by the companies were stimulated by enabling factors that, as a result, enhanced or hampered the local innovation culture.
Keywords: entrepreneurship, education, geography of innovation
Procedia PDF Downloads 418
70 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections
Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz
Abstract:
In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation, the asymptotic shape forms in such a way that along the whole contour the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices.
Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial use. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage. Individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can use the tool to adjust the process in a very convenient manner on a tablet, while the developer can execute the tool in his office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on the investigation of the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process
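The condition that the absorbed heat flux density equals the intensity threshold along the whole contour can be integrated numerically for the simplest configuration. The sketch below is our own minimal reconstruction for a Gaussian beam, ignoring multiple reflections; it is not the AsymptoticDrill code, and all parameter values are arbitrary illustrative units:

```python
import numpy as np

# For a surface profile z(r) irradiated along -z, the absorbed flux density
# is I(r) / sqrt(1 + z'(r)^2); setting it equal to the threshold I_th gives
# the asymptotic wall slope z'(r) = sqrt((I(r)/I_th)^2 - 1).
I0, w, I_th = 5.0, 0.5, 1.0                 # peak intensity, beam radius, threshold
r_max = w * np.sqrt(np.log(I0 / I_th) / 2)  # rim radius where I(r_max) = I_th
r = np.linspace(0.0, r_max, 500)
I = I0 * np.exp(-2 * r**2 / w**2)           # Gaussian beam profile
slope = np.sqrt(np.maximum((I / I_th)**2 - 1.0, 0.0))
# Hole depth on the axis: trapezoidal integral of the slope from axis to rim.
depth_axis = float(np.sum(0.5 * (slope[1:] + slope[:-1]) * np.diff(r)))
```

The wall is steepest on the axis (where the intensity surplus over the threshold is largest) and flattens to zero slope at the rim, where the beam intensity has fallen to the threshold.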
Procedia PDF Downloads 213
69 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks
Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios
Abstract:
To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient, and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of (i) local weather data, such as humidity, temperature, and precipitation, (ii) weather forecast data, such as the outdoor temperature, and (iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques for solving real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors, and reducing excessive energy production.
Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand
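As a small illustration of the data preparation this forecasting setup entails, the helper below (our own sketch, not the authors' code) frames an hourly series as the supervised input/target windows typically fed to an LSTM: the past 24 hours as input and the next 24 hours as the forecast target.

```python
import numpy as np

def make_windows(series, n_in=24, n_out=24):
    """Slide over an hourly series and emit (X, y) pairs: n_in past hours
    as input and the following n_out hours as the forecast target."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

# Stand-in for ten days of hourly outdoor temperature (illustrative data).
temps = np.sin(np.linspace(0, 20, 240))
X, y = make_windows(temps)
```

In the two-stage setup described above, the temperature model would be trained on windows like these first, and its 24-hour predictions then appended to the demand model's inputs.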
Procedia PDF Downloads 142
68 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses
Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen
Abstract:
Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health, and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw-funded, 2014-2018) is to develop an online education and communication One Health platform, "eZoon", for the general public and for professionals working in the veterinary, human health, and public health domains, to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health, and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions, consisting of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers.
Results: The following key values were identified (among others): (a) the need for improved awareness in the veterinary and human health fields of each other's work; (b) information exchange between veterinary and human health, in particular at the regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately, using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting the key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level will be trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media channels to inform the general public adequately during outbreaks.
Keywords: ehealth, one health, risk communication, stakeholder, zoonosis
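The salience ranking by power, legitimacy, and urgency mentioned in the methods can be sketched as a toy scoring function. The attribute assignments, class labels, and stakeholder names below are hypothetical illustrations of the approach, not the project's actual stakeholder data:

```python
def salience(power, legitimacy, urgency):
    """Mitchell-style stakeholder salience: score a stakeholder by how many
    of the three attributes it holds and map the count to a salience class."""
    score = sum([power, legitimacy, urgency])
    label = {3: "definitive", 2: "expectant", 1: "latent", 0: "non-stakeholder"}[score]
    return score, label

# Hypothetical example entries: (power, legitimacy, urgency)
stakeholders = {
    "national public health institute": (True, True, True),
    "regional veterinarian": (True, True, False),
}
ranked = sorted(stakeholders, key=lambda s: salience(*stakeholders[s])[0], reverse=True)
```

Sorting by the score reproduces the prioritization step: stakeholders holding all three attributes are addressed first in interviews and platform design.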
Procedia PDF Downloads 286
67 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected closer to the loads. Yet, low voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage of fossil fuel-based technologies regarding regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks when charging the unit. Three categories of PS algorithms are introduced in detail.
First, algorithms using a constant threshold or constant power rate for charge recovery; then, algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast, the impact of whose accuracy is discussed. A set of performance metrics was defined in order to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power rate is far from optimal: a single value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
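The simplest of the three categories, a constant threshold with charge recovery below it, can be sketched in a few lines. This is our own illustrative simulation, not the paper's implementation; power is in kW, energy in kWh, and 1-hour steps are used for brevity (the paper works at 1-minute granularity):

```python
def peak_shave(load, threshold, capacity, soc0=0.5, max_rate=2.0):
    """Constant-threshold peak shaving with SOC-aware charge recovery:
    discharge to cap the grid draw at `threshold`, recharge below it
    without ever pushing the grid draw above the threshold."""
    soc, grid = soc0 * capacity, []
    for p in load:
        if p > threshold:                      # shave the peak
            discharge = min(p - threshold, max_rate, soc)
            soc -= discharge
            grid.append(p - discharge)
        else:                                  # recover charge up to the threshold
            charge = min(threshold - p, max_rate, capacity - soc)
            soc += charge
            grid.append(p + charge)
    return grid

shaved = peak_shave([1, 1, 5, 1], threshold=3, capacity=4)
```

Because recovery charging is capped at the gap between the current load and the threshold, the algorithm never creates a new peak while refilling the battery, which is exactly the failure mode the abstract warns about.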
Procedia PDF Downloads 117
66 Mobile Phones, (Dis) Empowerment and Female Headed Households: Trincomalee, Sri Lanka
Authors: S. A. Abeykoon
Abstract:
This study explores the empowerment potential of the mobile phone, the most widely penetrated and most affordable communication technology in Sri Lanka, for female heads of households in Trincomalee District, Sri Lanka, an area recovering from the effects of a 30-year civil war and the 2004 Boxing Day Tsunami. It also investigates how the use of mobile phones by these women is shaped and appropriated by the gendered power relations and inequalities in their respective communities and by their socio-economic and demographic characteristics. This qualitative study is based on the epistemology of constructionism; interpretivist, functionalist, and critical theory approaches; and the process of action research. The data collection was conducted from September 2014 to November 2014 in two Divisional Secretariat divisions of the Trincomalee District, Sri Lanka. A total of 30 semi-structured in-depth interviews and six focus groups with female heads of households of Sinhalese, Tamil, and Muslim ethnicities were conducted using purposive, representative, and snowball sampling methods. The grounded theory method was used to analyze the transcribed interviews, focus group discussions, and field notes, which were coded and categorized in accordance with the research questions and the theoretical framework of the study. The findings of the study indicated that the mobile phone has mainly enabled the participants to balance their income-earning activities and family responsibilities and has been useful in maintaining their family and social relationships and occupational duties and in making decisions. Thus, it provided them a higher level of security, safety, reassurance, and self-confidence in carrying out their daily activities. They also practiced innovative strategies for the effective and efficient management of their mobile expenses.
Although participants whose husbands or relatives had migrated were more inclined to use smartphones, the mobile literacy of the majority of participants was at a low level, limited to making and receiving calls and using SMS (Short Message Service). However, their interaction with the mobile phone was significantly shaped by gendered power relations and by their multiple identities based on ethnicity, religion, class, education, profession, and age. Almost all the participants were cautious about giving out their mobile numbers and had been harassed with 'nuisance calls' from men. For many, ownership and use of their mobile phone were shaped and influenced by their children and migrated husbands. Although these practices limit their use of the technology, there were many instances in which they challenged these gendered harassments. While man-made and natural disasters have disempowered and victimized women in Sri Lankan society, they have also liberated women, making them stronger and transforming their agency and traditional gender roles. Their present position in society is therefore reflected in their mobile phone use, as mobile phones assist such women to be more self-reliant and liberated, while at times also disempowering them.
Keywords: mobile phone, gender power relations, empowerment, female heads of households
Procedia PDF Downloads 336
65 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. Nowadays, a field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, such as imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, so that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the absence of movement of the affected limb may be replaced by a synthetic feedback based on the motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3, CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a 3D color-coded activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. The first results on several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases the results match findings from visual EEG analysis. Frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time. Due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG signal processing, rehabilitation
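The two-stage detection rule described above (strongest motor channel over a threshold, and no comparable alpha amplitude outside the motor set) can be sketched as follows. This is our own illustrative reconstruction, not the LiveSolo code; the threshold and margin values are assumptions, and the band-power estimate is a plain FFT rather than the paper's preprocessing:

```python
import numpy as np

MOTOR = {"C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"}

def alpha_amplitude(x, fs):
    """Mean spectral amplitude in the 8-12 Hz alpha band (simple estimate)."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[(f >= 8) & (f <= 12)].mean()

def detect_mu(eeg, fs, threshold, margin=0.5):
    """Flag a mu-rhythm when the strongest motor channel exceeds the
    threshold AND no channel outside the motor set reaches a comparable
    alpha amplitude (guarding against posterior alpha)."""
    amps = {ch: alpha_amplitude(sig, fs) for ch, sig in eeg.items()}
    main_ch = max((ch for ch in amps if ch in MOTOR), key=amps.get)
    if amps[main_ch] < threshold:
        return None
    if any(amps[ch] >= margin * amps[main_ch] for ch in amps if ch not in MOTOR):
        return None                      # likely posterior alpha, not mu
    return main_ch

# Synthetic 1-second epoch: 10 Hz rhythm on C3, near silence occipitally.
fs = 250
t = np.arange(fs) / fs
eeg = {"C3": np.sin(2 * np.pi * 10 * t), "O1": 0.01 * np.sin(2 * np.pi * 10 * t)}
mu_channel = detect_mu(eeg, fs, threshold=0.05)
```

The detected channel's signal would then serve as the template for the spatial distribution fed into the inverse method.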
Procedia PDF Downloads 387
64 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have suffered due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called "resilient LGBM", will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model's accuracy during extreme events such as COVID-19 lockdowns.
The results for the LGBM and the resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models' performance will be evaluated over a set of real households' hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
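The abstract does not name the concept drift detector behind "resilient LGBM", so as an illustrative stand-in, the sketch below implements the Page-Hinkley test, one common drift detector for streams of forecast errors. A sustained jump in error, such as the one caused by a lockdown-induced behavior shift, drives the cumulative statistic past the threshold, signaling that the model should be retrained:

```python
class PageHinkley:
    """Page-Hinkley test on a stream of forecast errors: track the running
    mean and a cumulative deviation; drift is signaled when the deviation
    rises more than `threshold` above its historical minimum."""
    def __init__(self, delta=0.005, threshold=5.0):
        self.delta, self.threshold = delta, threshold
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, error):
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.threshold  # True -> drift

ph = PageHinkley()
drift_at = None
# Forecast error jumps at step 50, mimicking a lockdown-like regime shift.
for t, err in enumerate([0.1] * 50 + [2.0] * 20):
    if ph.update(err):
        drift_at = t
        break
```

In a resilient pipeline, a `True` from `update` would trigger refitting the LGBM on a recent window of data rather than the full pre-shift history.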
Procedia PDF Downloads 105
63 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin
Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng
Abstract:
Residual antibiotics in the environment pose a threat to the environment and to human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater is very important. The main sources of NOR pollution are agricultural, pharmaceutical-industry, and hospital wastewater. The total consumption of NOR in China reaches 5440 tons per year. Neither animals nor humans can totally absorb and metabolize NOR, resulting in the excretion of NOR into the environment. Therefore, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of wastewater treatment plants for NOR is limited (it is reported that the average removal efficiency of NOR in wastewater treatment plants is only 68%); (2) NOR entering the environment will lead to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, the removal and detection technologies for NOR are applied separately, which leads to a cumbersome operation process. The development of simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation promotes detection (the enrichment effect on the material surface improves the detection ability); (2) integrating adsorption-flocculation and detection reduces the material cost and simplifies operation; (3) FTIR detection endows the water treatment agent with the ability of molecular recognition and semi-quantitative detection of pollutants. Thus, it is of great significance to develop a smart water treatment material with both high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method.
A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation technology with semi-quantitative detection technology. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate of NOR (96.92%), wide pH adaptability (pH = 4.0-10.0), and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation, and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints; the intensity of the characteristic peaks roughly reflects the concentration information. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability for NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water.
Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin
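The Langmuir fitting mentioned above relates equilibrium uptake to equilibrium concentration. The sketch below evaluates the Langmuir isotherm using the reported fitted capacity; note that the affinity constant K_L is an assumed placeholder, since the abstract does not report its value:

```python
def langmuir(c_e, q_max=417.4, k_l=0.05):
    """Langmuir isotherm q_e = q_max * K_L * C_e / (1 + K_L * C_e),
    with q_max = 417.4 mg/g (the capacity reported at 25 C) and an
    illustrative, assumed K_L in L/mg."""
    return q_max * k_l * c_e / (1 + k_l * c_e)

# Uptake rises monotonically toward the q_max plateau with concentration.
uptakes = [langmuir(c) for c in (10, 100, 1000)]
```

Fitting q_max and K_L to batch-experiment data (for instance by linearizing C_e/q_e against C_e) is how the 417.4 mg/g figure would have been obtained.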
Procedia PDF Downloads 76