Search results for: finished volumes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 599

29 Digital Transformation in Fashion System Design: Tools and Opportunities

Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci

Abstract:

The fashion industry's interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the resulting languages and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry where, for example, the traditional values of industrial secrecy and know-how give way to open and participatory experimentation, and virtual reality becomes fully emancipated from actual 'reality'. This contribution investigates digitisation in the Italian fashion industry, analysing its opportunities and the criticalities that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology cannot reproduce the sense of touch; (ii) garment volumes are obtained by stitching flat surfaces that, once assembled, can assume almost infinite configurations given the flexibility of the material. Managing the fit and styling of virtual garments involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some of the salient historical milestones in the digital simulation of deformable materials and in user interfaces for constructing the clothing system, the paper describes the operation and possibilities offered today by the latest generation of specialised software: parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials modelled both in simulated physical behaviour and in aesthetic performance; tools for checking wearability and producing renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be produced. The article demonstrates how developments in technology and digital procedures now make it possible to intervene at different stages of design in the fashion industry, in an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses within the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills alongside traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.
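The mechanical simulation mentioned in this abstract is commonly implemented as a mass-spring system integrated over time. The sketch below is a minimal illustration of one explicit-Euler step for such a system; it is not taken from the paper, and the spring constant, mass, damping, and time step are arbitrary assumptions.

```python
import numpy as np

# Minimal mass-spring cloth step (illustrative only; all parameters are
# arbitrary assumptions, not values from the paper).
def step(positions, velocities, springs, rest_lengths,
         k=50.0, mass=0.01, damping=0.98, dt=1e-3,
         gravity=np.array([0.0, -9.81, 0.0])):
    """Advance a cloth particle system by one explicit-Euler step.

    positions, velocities : (N, 3) arrays of particle states
    springs               : (M, 2) array of particle index pairs
    rest_lengths          : (M,) array of spring rest lengths
    """
    forces = np.tile(gravity * mass, (len(positions), 1))
    for (i, j), L0 in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            f = k * (dist - L0) * (d / dist)  # Hooke's law along the spring
            forces[i] += f
            forces[j] -= f
    velocities = damping * (velocities + dt * forces / mass)
    positions = positions + dt * velocities
    return positions, velocities
```

Production garment simulators add collision detection and implicit integration on top of this basic scheme, which is what makes interactive fit checking feasible.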

Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation

Procedia PDF Downloads 74
28 Membrane Technologies for Obtaining Bioactive Fractions from Blood Main Protein: An Exploratory Study for Industrial Application

Authors: Fatima Arrutia, Francisco Amador Riera

Abstract:

The meat industry generates large volumes of blood as a result of meat processing. Several industrial procedures have been implemented to treat this by-product, but they focus on the production of low-value products, and in many cases, blood is simply discarded as waste. Besides economic interests, there is an environmental concern due to bloodborne pathogens and other chemical contaminants found in blood. Consequently, there is a dire need to find extensive uses for blood that are both applicable at industrial scale and able to yield high value-added products. Blood has been recognized as an important source of protein. The main blood serum protein in mammals is serum albumin. One of the top trends in the food market is functional foods. Among them, bioactive peptides can be obtained from protein sources by microbiological fermentation or by enzymatic and chemical hydrolysis. Bioactive peptides are short amino acid sequences that can have a positive impact on health when administered. The main drawback of bioactive peptide production is the high cost of the isolation, purification and characterization techniques (such as chromatography and mass spectrometry), which makes scale-up unaffordable. On the other hand, membrane technologies are very suitable for industrial application because they offer easy scale-up and are low-cost compared to other traditional separation methods. In this work, the possibility of obtaining bioactive peptide fractions from serum albumin by means of a simple two-step procedure (hydrolysis and membrane filtration) was evaluated as an exploratory study for possible industrial application. The methodology was, firstly, a tryptic hydrolysis of serum albumin to release the peptides from the protein. The protein was previously subjected to a thermal treatment to enhance enzyme cleavage and thus the peptide yield. The obtained hydrolysate was then filtered through a flat nanofiltration/ultrafiltration rig at three different pH values with two different membrane materials, so as to compare membrane performance. The corresponding permeates were analyzed by liquid chromatography-tandem mass spectrometry to obtain the peptide sequences present in each permeate. Finally, different concentrations of every permeate were evaluated for their in vitro antihypertensive and antioxidant activities through ACE-inhibition and DPPH radical scavenging tests. The hydrolysis process with the prior thermal treatment achieved a degree of hydrolysis of 49.66% of the maximum possible. It was found that peptides were best transmitted to the permeate stream at pH values corresponding to their isoelectric points. The best selectivity between peptide groups was achieved at basic pH values. Differences in peptide content were found between membranes and also between pH values for the same membrane. The antioxidant activity of all permeates was high compared with the control only for the highest dose. However, antihypertensive activity was best at intermediate concentrations, rather than at higher or lower doses. Therefore, despite differences between them, all permeates were promising regarding antihypertensive and antioxidant properties.
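For reference, the degree of hydrolysis (DH) quoted above is conventionally defined as the fraction of peptide bonds cleaved (the abstract does not state which measurement method was used):

```latex
\mathrm{DH}\,(\%) = \frac{h}{h_{\mathrm{tot}}} \times 100
```

where h is the number of peptide bonds cleaved and h_tot is the total number of peptide bonds in the protein substrate; the reported 49.66% is expressed relative to the maximum achievable for tryptic cleavage of serum albumin.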

Keywords: bioactive peptides, bovine serum albumin, hydrolysis, membrane filtration

Procedia PDF Downloads 200
27 Industrial Waste to Energy Technology: Engineering Biowaste as High Potential Anode Electrode for Application in Lithium-Ion Batteries

Authors: Pejman Salimi, Sebastiano Tieuli, Somayeh Taghavi, Michela Signoretto, Remo Proietti Zaccaria

Abstract:

The increasing growth of industrial waste due to large production quantities leads to numerous environmental and economic challenges, such as climate change, soil and water contamination, human disease, etc. Energy recovery from waste can be applied to produce heat or electricity. This strategy reduces the energy produced from coal or other fuels and directly reduces greenhouse gas emissions. Among different industries, leather manufacturing plays a very important role worldwide from a socio-economic point of view. Even though the leather industry uses a by-product of the meat industry as raw material, it is considered an activity demanding integrated prevention and control of pollution. Along the entire process from raw skins/hides to finished leather, a huge amount of solid and water waste is generated. Solid wastes include fleshings, raw trimmings, shavings, buffing dust, etc. One of the most abundant solid wastes generated throughout leather tanning is shaving waste. Leather shaving is a mechanical process that aims at reducing the tanned skin to a specific thickness before finishing. This product consists mainly of collagen and tanning agent. At present, most of the world's leather processing is chrome-tanned based. Consequently, large amounts of chromium-containing shaving wastes need to be treated. The major concern in the management of this kind of solid waste is its chrome content, which makes conventional disposal methods, such as landfilling and incineration, impracticable. Therefore, many efforts have been made in recent decades to promote eco-friendly/alternative leather production and more effective waste management. Herein, shaving waste resulting from metal-free tanning technology is proposed as a low-cost precursor for the preparation of carbon materials as anodes for lithium-ion batteries (LIBs). In line with the philosophy of reduced environmental impact, deionized water and carboxymethyl cellulose (CMC) were used for preparing fully sustainable and environmentally friendly LIB anodes, as alternatives to the toxic/teratogenic N-methyl-2-pyrrolidone (NMP) and the biologically hazardous polyvinylidene fluoride (PVdF), respectively. Furthermore, moving towards reduced cost, we employed the water solvent and the fluorine-free bio-derived CMC binder together with LiFePO₄ (LFP) when a full cell was considered. These actions bring closer the 2030 goal of green LIBs at 100 $ kWh⁻¹. Besides, the preparation of the water-based electrodes does not require a controlled environment, and due to the higher vapour pressure of water in comparison with NMP, water-based electrode drying is much faster. This has an important consequence, namely reduced energy consumption in electrode preparation. The electrode derived from leather waste demonstrated a discharge capacity of 735 mAh g⁻¹ after 1000 charge and discharge cycles at 0.5 A g⁻¹. This promising performance is ascribed to the synergistic effect of defects, interlayer spacing, heteroatom doping (N, O, and S), high specific surface area, and the hierarchical micro/mesopore structure of the biochar. Interestingly, these features of activated biochars derived from the leather industry open the way for possible applications in other electrochemical energy storage devices (EESDs) as well.

Keywords: biowaste, lithium-ion batteries, physical activation, waste management, leather industry

Procedia PDF Downloads 171
26 Mangroves in the Douala Area, Cameroon: The Challenges of Open Access Resources for Forest Governance

Authors: Bissonnette Jean-François, Dossa Fabrice

Abstract:

The project focuses on analyzing the spatial and temporal evolution of mangrove forest ecosystems near the city of Douala, Cameroon, in response to increasing human and environmental pressures. The selected study area, located in the Wouri River estuary, presents a unique combination of economic importance and ecological prominence. The study gathered valuable insights through semi-structured interviews with resource operators and local officials. Socio-economic data, farmer surveys, and satellite-derived information were analyzed quantitatively in Excel and SPSS. Simultaneously, qualitative data were subjected to rigorous classification and correlation with other sources. The use of ArcGIS and CorelDraw facilitated the visual representation of the gradual changes observed in the various land cover classifications. The research reveals the complex processes that characterize mangrove ecosystems on Manoka and Cape Cameroon Islands. The lack of regulation of urbanization and the continuous growth of infrastructure have led to a significant increase in land conversion, with negative impacts on natural landscapes and forests. Repeated instances of flooding and coastal erosion have further shaped landscape alterations, fostering the proliferation of water and mudflat areas. The unregulated use of mangrove resources is a significant factor in the degradation of these ecosystems. Activities such as the use of wood for fish smoking and for fishing, together with the coastal pollution resulting from the absence of waste collection, have had a significant influence. In addition, forest operators contribute to the degradation of vegetation, exacerbating the harmful impact of invasive species on the ecosystem. Strategic interventions are necessary to guarantee the sustainable management of these ecosystems. The proposals include advocating for sustainable wood exploitation and regeneration techniques and enforcing rules to prevent wood overexploitation. By implementing these measures, the ecological balance can be preserved, safeguarding the long-term viability of these precious ecosystems. On a conceptual level, this paper uses the framework developed by Elinor Ostrom and her colleagues to investigate the consequences of open access resources, where local actors have not been able to enforce measures to prevent overexploitation of mangrove wood resources. Governmental authorities have demonstrated limited capacity to enforce sustainable management of wood resources and have not been able to establish effective relationships with local fishing communities and with communities involved in the purchase of wood. As a result, wood resources in the mangrove areas remain largely open to access, while authorities monitor neither the wood volumes extracted nor the methods of exploitation. There have been only limited and sporadic attempts at forest restoration, with no significant consequence for mangrove forest dynamics.

Keywords: mangroves, forest management, governance, open access resources, Cameroon

Procedia PDF Downloads 63
25 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
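To make the flexible-least-squares idea concrete, the following is a minimal sketch assuming the standard Kalaba-Tesfatsion FLS objective applied to the linearized Langmuir form p/q = 1/(K·qm) + p/qm; the smoothness weight mu and this particular linearization are illustrative choices, not the authors' code.

```python
import numpy as np

# Illustrative sketch of flexible least squares (FLS) with pressure-varying
# parameters, applied to the linearized Langmuir form p/q = 1/(K*qm) + p/qm.
def fls_pressure_varying(p, q, mu=100.0):
    """Return per-pressure coefficients b_t = [intercept, slope] minimizing
    sum_t (y_t - x_t @ b_t)^2 + mu * sum_t ||b_{t+1} - b_t||^2."""
    y = p / q                          # linearized Langmuir response
    n, k = len(p), 2
    X = np.column_stack([np.ones(n), p])
    # Stack measurement equations and smoothness (dynamic) equations.
    A = np.zeros((n + (n - 1) * k, n * k))
    b = np.zeros(n + (n - 1) * k)
    for t in range(n):
        A[t, t * k:(t + 1) * k] = X[t]
        b[t] = y[t]
    w = np.sqrt(mu)
    for t in range(n - 1):
        r = n + t * k
        A[r:r + k, t * k:(t + 1) * k] = -w * np.eye(k)
        A[r:r + k, (t + 1) * k:(t + 2) * k] = w * np.eye(k)
    beta = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, k)
    qm = 1.0 / beta[:, 1]              # pressure-varying uptake capacity
    K = beta[:, 1] / beta[:, 0]        # pressure-varying equilibrium constant
    return qm, K
```

Larger mu forces the parameters toward the constant-parameter Langmuir fit; smaller mu lets qm and K drift with pressure, which is the behaviour the abstract exploits.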

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 234
24 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations

Authors: Oleg Kabantsev, Karomatullo Umarov

Abstract:

The engineering analysis of earthquake consequences demonstrates significantly different levels of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are so vulnerable. Thus, the study of the formation and accumulation of damage in reinforced concrete frame-with-shear-wall structures requires new methods for assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of nonlinear deformation of structures up to destruction (collapse) make it possible to substantiate the characteristics of the limit states of the various structures forming an earthquake-resistant load-bearing system. The research into the elastic-plastic deformation processes of reinforced concrete frames with shear walls is carried out on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitations. Limit values of deformations are defined for conditions under which local damage of the maximum permissible level forms in the structures. The research is performed by numerical methods using ETABS software. The results indicate that under earthquake excitations, plastic deformations of various levels form in various groups of elements of the frame-with-shear-wall load-bearing system. During the main period of seismic excitation, insignificant volumes of plastic deformation arise in the shear-wall elements of the load-bearing system, significantly below the permissible level. At the same time, plastic deformations form in the columns but do not exceed the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. Such a volume of plastic deformation leads to an increase in the general deformations of the bearing system. With the specified parameters of shear-wall deformation, plastic deformations exceeding the limiting values develop in the concrete columns, which leads to the collapse of such columns. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor common to the whole load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance for buildings with reinforced concrete columns and separate shear walls, it is necessary to use seismic-force-reduction factors differentiated by structural group type.
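The conclusion can be illustrated with a toy calculation (entirely illustrative, with made-up numbers): a single global seismic-force-reduction factor and group-differentiated factors yield different design demands for columns and shear walls.

```python
# Toy illustration (made-up numbers): design demand with a single global
# seismic-force-reduction factor vs. factors differentiated by structural group.
elastic_demand = {"columns": 1200.0, "shear_walls": 3400.0}    # kN (assumed)

R_global = 4.0                                      # one factor for the system
R_by_group = {"columns": 2.5, "shear_walls": 4.5}   # differentiated (assumed)

for group, F in elastic_demand.items():
    print(f"{group}: global {F / R_global:.0f} kN "
          f"vs differentiated {F / R_by_group[group]:.0f} kN")
```

With these numbers, the global factor under-designs the columns (300 kN vs 480 kN), mirroring the column collapse mechanism the study identifies.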

Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis

Procedia PDF Downloads 207
23 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the alpine region, available areas for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication, designed to optimize construction quality, construction time and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction, suitable for a maximum two-storey addition to residential buildings. The applicability of the system is mainly influenced by the existing building stock. Therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification is created. Component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications. In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and the system is applied in the planning of a two-storey extension on the other. A company-independent construction standard offers the possibility of cooperation and the bundling of capacities in order to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for one's own use, as long as authorship and deviations are mentioned. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution as well as the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 111
22 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
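As an illustration of the lumped formulation described above, the sketch below integrates simplified coupled balances for an ullage-gas and a liquid control volume and fits one closure coefficient to pressure data. This is our own reduction to two volumes, not the authors' three-volume model, and every property value is a placeholder assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Simplified two-volume (ullage gas + liquid) tank model; the interface
# exchanges heat/mass via a closure coefficient h_ul, with external heat
# leaks Q_u, Q_l. All values below are placeholder assumptions.
R, M, L_vap = 8.314, 2.016e-3, 4.46e5    # gas constant, H2 molar mass, J/kg
cv_g, cp_l = 6.2e3, 9.7e3                # J/(kg K), assumed
V_tank, Q_u, Q_l = 0.5, 5.0, 20.0        # m3, W, W (assumed heat leaks)

def rhs(t, y, h_ul):
    m_g, T_g, m_l, T_l = y
    A_int = 0.2                          # interface area, m2 (assumed)
    T_sat = 20.3                         # K, ~1 bar saturation (approx.)
    # Evaporation driven by liquid superheat at the interface (closure law)
    mdot = max(h_ul * A_int * (T_l - T_sat), 0.0) / L_vap
    dT_g = (Q_u + mdot * cp_l * (T_sat - T_g)) / (m_g * cv_g)
    dT_l = (Q_l - mdot * L_vap) / (m_l * cp_l)
    return [mdot, dT_g, -mdot, dT_l]

def pressure(m_g, T_g, m_l):
    V_g = V_tank - m_l / 70.8            # liquid density ~70.8 kg/m3
    return m_g / M * R * T_g / V_g       # ideal-gas ullage pressure

def calibrate(t_data, p_data, y0):
    """Find the closure coefficient h_ul that best reproduces p(t)."""
    def cost(h_ul):
        sol = solve_ivp(rhs, (t_data[0], t_data[-1]), y0,
                        t_eval=t_data, args=(h_ul,))
        p = pressure(sol.y[0], sol.y[1], sol.y[2])
        return np.sum((p - p_data) ** 2)
    return minimize_scalar(cost, bounds=(1.0, 500.0), method="bounded").x
```

In the paper's approach the same idea is extended to several closure coefficients per operating regime, calibrated against the reduced-scale tank experiments.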

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 95
21 The Study of Fine and Nanoscale Gold in the Ores of Primary Deposits and Gold-Bearing Placers of Kazakhstan

Authors: Omarova Gulnara, Assubayeva Saltanat, Tugambay Symbat, Bulegenov Kanat

Abstract:

The article discusses the problem of developing a methodology for studying fine and nanoscale gold in the ores and placers of primary deposits, which will allow schemes to be developed for revealing dispersed gold inclusions and thus improve its recovery rate, increasing the gold reserves of the Republic of Kazakhstan. The type of gold studied is characterized by a number of features. In connection with this, the conditions of its concentration and distribution in ore bodies and formations, as well as the possibility of reliably determining it by "traditional" methods, differ significantly from those of fine gold (less than 0.25 microns) and even more so from those of larger grains. The mineral composition of rocks (metasomatites) and of the gold ore and associated mineralization was studied in detail on the Kalba ore field in Kazakhstan. Mineralized zones were identified, and samples were taken from them for analytical studies. The research revealed paragenetic relationships of newly formed mineral phases at the nanoscale, which makes it possible to clarify the conditions for the formation of deposits with a particular type of mineralization. This will provide significant assistance in developing a scheme for study. Typomorphic features of the gold were revealed, and mechanisms of formation and aggregation of gold nanoparticles were proposed. The presence of a large number of particles isolated at the laboratory stage from concentrates of gravitational enrichment can serve as an indicator of the presence of even smaller particles in the object. Even the most advanced devices based on gravitational methods of gold concentration extract metal at a level of around 50%, while pulverized metal is extracted much less effectively, and gold of less than 1 micron in size is extracted at only a few percent. Therefore, when particles of gold smaller than 10 microns are detected, their actual numbers may be significantly higher than expected. In particular, at the studied sites, enrichment of slurry and samples with volumes up to 1 m³ was carried out using a screw sluice or separator to produce a final concentrate weighing up to several kilograms. Free gold particles were extracted from the concentrates in the laboratory using a number of processes (magnetic and electromagnetic separation, washing with bromoform in a cup to obtain an ultraconcentrate, etc.) and examined under electron microscopes to investigate the nature of their surface and their chemical composition. The main result of the study was the detection of gold nanoparticles located on the surface of loose metal grains. The most characteristic forms of gold occurrence are individual nanoparticles and aggregates of different configurations. Sometimes, aggregates form solid dense films, deposits, and crusts, all of which are confined to the negative forms of the nano- and microrelief on the surfaces of gold grains. The results will provide significant knowledge about the prevalence and conditions of distribution of fine and nanoscale gold in Kazakhstan deposits, as well as the development of methods for studying it, which will minimize losses of this type of gold during extraction. Acknowledgments: This publication has been produced within the framework of the Grant "Development of methodology for studying fine and nanoscale gold in ores of primary deposits, placers and products of their processing" (АР23485052, №235/GF24-26).

Keywords: electron microscopy, microminerology, placers, thin and nanoscale gold

Procedia PDF Downloads 22
20 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system for all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is becoming relevant for local territories as well. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out its multiple functions, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them available at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The derived framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: • integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for the ingestion of social and web information; • reading and interpretation of data and metadata through guided navigation paths presented as digital storytelling; • implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
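As a minimal illustration of the automated web-crawling ingestion step mentioned above, the sketch below fetches a listings page and derives one indicator. The URL, CSS selector, and indicator name are hypothetical placeholders, not the project's actual sources.

```python
import requests
from bs4 import BeautifulSoup

# Minimal sketch of a web-crawling ingestion step of the kind the framework
# describes. URL, selector, and indicator name are hypothetical placeholders.
def ingest_accommodation_supply(url="https://example.org/listings?region=apulia"):
    """Fetch a listings page and derive one simple indicator:
    the number of accommodation facilities advertised."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    listings = soup.select("div.listing")        # hypothetical selector
    return {
        "thematic_area": "accommodation supply",
        "indicator": "advertised_facilities",
        "value": len(listings),
        "source": url,
    }
```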

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 122
19 Exploiting the Tumour Microenvironment in Order to Optimise Sonodynamic Therapy for Cancer

Authors: Maryam Mohammad Hadi, Heather Nesbitt, Hamzah Masood, Hashim Ahmed, Mark Emberton, John Callan, Alexander MacRobert, Anthony McHale, Nikolitsa Nomikou

Abstract:

Sonodynamic therapy (SDT) utilises ultrasound in combination with sensitizers, such as porphyrins, for the production of cytotoxic reactive oxygen species (ROS) and the confined ablation of tumours. Ultrasound can be applied locally, and the acoustic waves, at frequencies between 0.5-2 MHz, are transmitted efficiently through tissue. SDT does not require highly toxic agents, and the cytotoxic effect only occurs upon ultrasound exposure at the site of the lesion. Therefore, this approach is not associated with adverse side effects. Further highlighting the benefits of SDT, no cancer cell population has, to the authors' best knowledge, shown resistance to therapy-triggered ROS production or its cytotoxic effects. This is particularly important given the as yet unresolved issues of radiation resistance and chemoresistance. Another potential future benefit of this approach – considering its non-thermal mechanism of action – is its possible role as an adjuvant to immunotherapy. Substantial pre-clinical studies have demonstrated the efficacy and targeting capability of this therapeutic approach. However, SDT has yet to be fully characterised and appropriately exploited for the treatment of cancer. In this study, a formulation based on multistimulus-responsive sensitizer-containing nanoparticles that can accumulate in advanced prostate tumours and increase the therapeutic efficacy of SDT has been developed. The formulation is based on a polyglutamate-tyrosine (PGATyr) co-polymer carrying hematoporphyrin. The efficacy of SDT in this study was demonstrated using prostate cancer as the translational exemplar. The formulation was designed to respond to the microenvironment of advanced prostate tumours, such as the overexpression of the proteolytic enzymes cathepsin-B and prostate-specific membrane antigen (PSMA), which can degrade the nanoparticles and reduce their size, improving both diffusion throughout the tumour mass and cellular uptake. The therapeutic modality was initially tested in vitro using LNCaP and PC3 cells as target cell lines. The SDT efficacy was also examined in vivo, using male SCID mice bearing LNCaP subcutaneous tumours. We have demonstrated that the PGATyr co-polymer is digested by cathepsin-B and that digestion of the formulation under tumour-mimicking conditions (acidic pH) leads to decreased nanoparticle size and subsequently increased cellular uptake. Sonodynamic treatment, under both normoxic and hypoxic conditions, demonstrated ultrasound-induced cytotoxic effects only for the nanoparticle-treated prostate cancer cells, while the toxicity of the formulation in the absence of ultrasound was minimal. Our in vivo studies in immunodeficient mice, using the hematoporphyrin-containing PGATyr nanoparticles for SDT, showed a 50% decrease in LNCaP tumour volumes within 24 h, following IV administration of a single dose. No adverse effects were recorded, and body weight was stable. The results described in this study clearly demonstrate the promise of SDT to revolutionize cancer treatment. They emphasize the potential of this therapeutic modality as a first-line or combination treatment for the elimination or downstaging of difficult-to-treat cancers, such as prostate, pancreatic, and advanced colorectal cancer.

Keywords: sonodynamic therapy, nanoparticles, tumour ablation, ultrasound

Procedia PDF Downloads 139
18 Understanding the Perceived Barriers and Facilitators to Exercise Participation in the Workplace

Authors: Jayden R. Hunter, Brett A. Gordon, Stephen R. Bird, Amanda C. Benson

Abstract:

The World Health Organisation recognises the workplace as an important setting for exercise promotion, with potential benefits including improved employee health and fitness, and reduced worker absenteeism and presenteeism. Despite these potential benefits to both employee and employer, there is a lack of evidence supporting the long-term effectiveness of workplace exercise programs. There is, therefore, a need for better-informed programs that cater to employee exercise preferences. Specifically, workplace exercise programs should address any time, motivation, internal and external barriers to participation reported by sub-groups of employees. This study sought to compare exercise participation with perceived barriers and facilitators to workplace exercise engagement among university employees. This information is needed to design and implement wider-reaching programs aiming to maximise long-term employee exercise adherence and the subsequent health, fitness and productivity benefits. An online survey was advertised at an Australian university with the potential to reach 3,104 full-time employees. Along with exercise participation (International Physical Activity Questionnaire) and behaviour (stage of behaviour change in relation to physical activity questionnaire), perceived barriers (Corporate Exercise Barriers Scale) and facilitators to workplace exercise participation were identified. The survey response rate was 8.1% (252 full-time employees; 95% white-collar; 60% female; 79.4% aged 30–59 years; 57% professional and 38% academic). Most employees reported meeting (43.7%) or exceeding (42.9%) exercise guidelines over the previous week (i.e. ⩾30 min of moderate-intensity exercise on most days or ⩾25 min of vigorous-intensity exercise on at least three days per week). Reported exercise behaviour over the previous six months showed that 64.7% of employees were in maintenance, 8.3% in action, 10.9% in preparation, 12.4% in contemplation, and 3.8% in the pre-contemplation stage of change. Perceived barriers towards workplace exercise participation were significantly higher in employees not attaining weekly exercise guidelines than in employees meeting or exceeding guidelines, including a lack of time or reduced motivation (p < 0.001; partial eta squared = 0.24 (large effect)), exercise attitude (p < 0.05; partial eta squared = 0.04 (small effect)), internal (p < 0.01; partial eta squared = 0.10 (moderate effect)) and external (p < 0.01; partial eta squared = 0.06 (moderate effect)) barriers. The most frequently reported exercise facilitators were personal training (particularly for insufficiently active employees; 33%) and group exercise classes (20%). The most frequently cited preferred modes of exercise were walking (70%), swimming (50%), gym (48%), and cycling (45%). In conclusion, providing additional means of support such as individualised gym, swimming and cycling programs with personal supervision and guidance may be particularly useful for employees not meeting recommended moderate-to-vigorous volumes of exercise, to help overcome reported exercise barriers and improve participation, health, and fitness. While individual biopsychosocial factors should be considered when making recommendations for interventions, the specific barriers and facilitators to workplace exercise participation identified by this study can inform the development of workplace exercise programs aiming to broaden employee engagement and promote greater ongoing exercise adherence. This is especially important for the uptake of less active employees, who perceive greater barriers to workplace exercise participation than their more active colleagues.
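For readers unfamiliar with the effect-size measure reported above, partial eta squared is computed from the ANOVA sums of squares:

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```

so the lack-of-time/motivation result (partial eta squared = 0.24) indicates that about 24% of the otherwise-unexplained variance in that barrier score is associated with the guideline-attainment grouping.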

Keywords: exercise barriers, exercise facilitators, physical activity, workplace health

Procedia PDF Downloads 146
17 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of the antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed the Yp 11C7 mAb (11C7) reagent to exhibit a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test-spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above, or lowering it below, our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinued droplet formation. After aspirating the 11C7 reagent, we tested this hypothesis under stroboscope. 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a "stroboscope score" for each run. The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on membrane output. As hydrostatic pressure was increased by raising the PCV 3.75 inches, or decreased by lowering the PCV 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium (our current working level), membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output through the dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
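As an illustration of the kind of two-factor DOE analysis described (the study used JMP; this statsmodels equivalent and its run data are invented for the sketch):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch of a two-factor DOE analysis of membrane output vs. PCV offset and
# reagent concentration. All run data below are invented placeholders.
data = pd.DataFrame({
    "pcv_offset_in": [-4.5, -4.5, 0.0, 0.0, 3.75, 3.75] * 2,   # assumed runs
    "conc_mg_ml":    [1.5, 5.0, 1.5, 5.0, 1.5, 5.0] * 2,
    "membranes":     [31, 38, 44, 50, 30, 36, 29, 40, 43, 50, 28, 37],
})
model = smf.ols("membranes ~ C(pcv_offset_in) + conc_mg_ml", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # effect of each factor on output
```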

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 183
16 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 61
15 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity

Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin

Abstract:

Microplastic sources and fluxes in urban catchments are only poorly studied. Most often, the approaches taken focus on a single source and only describe the contamination levels and types (shape, size, polymers). In order to gain improved knowledge of microplastic inputs at urban scales, estimating and comparing various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE) and the SIAAP (Service public de l'assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris Megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled during consecutive periods ranging from 2 to 3 weeks with a stainless-steel funnel, covering both wet and dry periods. Different treatment steps were sampled in two wastewater treatment plants (Seine-Amont for activated sludge and Seine-Centre for biofiltration) of the SIAAP, including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (30% H₂O₂) in order to reduce organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d = 1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration is carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter. The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark, and the Alfred Wegener Institute, Germany. Blanks were systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE and alkyd are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷–9×10⁷ MP.yr⁻¹.ha⁻¹ was estimated. Samples from wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help to prioritize future research as well as policy efforts.
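The runoff flux figure above follows from a simple scaling of concentration by runoff volume per impervious hectare; a sketch of the arithmetic (the concentration and annual runoff depth are placeholder assumptions, since the abstract reports only the final range):

```python
# Back-of-envelope flux scaling of the kind used above. The sample
# concentration and annual runoff depth are placeholder assumptions;
# only the resulting order of magnitude matters.
mp_per_liter = 15.0          # MP concentration in runoff (assumed)
runoff_mm_per_year = 400.0   # annual runoff depth over impervious area (assumed)

liters_per_ha_per_year = runoff_mm_per_year * 1e-3 * 1e4 * 1e3  # mm -> L/ha
flux = mp_per_liter * liters_per_ha_per_year
print(f"{flux:.1e} MP per year per impervious hectare")  # ~6e7, within 4e7-9e7
```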

Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters

Procedia PDF Downloads 181
14 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers

Authors: Catherine Vasnetsov, Victor Vasnetsov

Abstract:

Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature changes. These polymers have gained significant attention in research due to their potential applications in various industries and medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and advanced machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamics simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to higher radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations. The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study were collected through Monte Carlo computer simulations and molecular dynamics simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data were then analyzed to determine the impact of cosolvent volume fractions on the radii of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study aimed to investigate the interactions between the polymer, solvent, and cosolvent under different volume fractions and to understand the resulting changes in the radii of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by higher radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
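A minimal sketch of the kind of Flory-Huggins calculation underlying the study: the ternary mixing free energy per lattice site as a function of cosolvent fraction. The chain length and chi parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Flory-Huggins mixing free energy (per site, units of kT) for a ternary
# polymer(p)/solvent(s)/cosolvent(c) system. N and the chi parameters are
# illustrative assumptions, not values from the study.
N = 100                                   # polymer chain length (assumed)
chi_ps, chi_pc, chi_sc = 0.6, 0.3, 1.0    # pairwise interactions (assumed)

def f_mix(phi_p, phi_c):
    phi_s = 1.0 - phi_p - phi_c
    entropy = (phi_p / N) * np.log(phi_p) + phi_s * np.log(phi_s) \
              + phi_c * np.log(phi_c)
    enthalpy = chi_ps * phi_p * phi_s + chi_pc * phi_p * phi_c \
               + chi_sc * phi_s * phi_c
    return entropy + enthalpy

# Scan cosolvent volume fraction at fixed polymer fraction.
phi_p = 0.1
for phi_c in np.linspace(0.05, 0.85, 9):
    print(f"phi_c = {phi_c:.2f}  f_mix = {f_mix(phi_p, phi_c):+.4f}")
```

A lower (more negative) mixing free energy at intermediate phi_c is consistent with the improved miscibility the simulations report at intermediate cosolvent fractions.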

Keywords: molecular modelling, Flory-Huggins, cosolvency, stimuli-responsive polymers

Procedia PDF Downloads 70
13 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage in the region, most notably the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, more recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for an accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm, and in both cases, the effect of natural dunes on coastal risk was investigated. The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared with observations. Finally, the nearshore model XBeach was nested within the regional ADCIRC-SWAN grid to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, estimating eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), submerged breakwaters, and artificial surfing reefs. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.
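
The per-structure damage estimation step can be pictured with a minimal sketch: modeled inundation depth at each structure is passed through a depth-damage curve to yield a damage fraction. The linear curve and structure records below are hypothetical placeholders, not the tool's actual fragility functions or inputs.

```python
import numpy as np

def depth_damage_fraction(depth_m, first_floor_elev_m):
    """Hypothetical depth-damage curve: damage fraction rises linearly with
    water depth above the first-floor elevation, saturating at 1.0 once the
    effective depth exceeds 3 m. Real tools use structure-class-specific curves."""
    effective = np.maximum(depth_m - first_floor_elev_m, 0.0)
    return np.clip(effective / 3.0, 0.0, 1.0)

# Hypothetical structures: (id, first-floor elevation above grade, modeled inundation depth)
structures = [("A-101", 0.5, 1.8), ("A-102", 1.0, 0.7), ("B-205", 0.3, 2.9)]
for sid, ffe, depth in structures:
    frac = depth_damage_fraction(depth, ffe)
    print(f"structure {sid}: inundation {depth:.1f} m -> estimated damage {frac:.0%}")
```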

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 117
12 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces

Authors: Somnath Bhattacharyya

Abstract:

The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of these physisorbed ions creates a friction force as well as an electric force, leading to a modification of the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on the streaming potential and the electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through the nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified to include the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. Extremization of the self-energy leads to a fourth-order Poisson equation for the electric field. Ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations, along with the prescribed boundary conditions, by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, velocity components are stored at the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow in the modified model. To link pressure to the continuity equation, we adopt a pressure-correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted to a Poisson equation involving pressure-correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity under streaming-potential conditions, which enhances the convection current. However, the electroosmotic flow attenuates due to the mobile surface ions.
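
To illustrate the TVD idea used for the convection and electromigration terms, here is a minimal 1D finite-volume sketch of limited advection with a minmod slope limiter. It is a toy stand-in, far simpler than the coupled Nernst-Planck/Navier-Stokes solver described above, and the grid, velocity, and CFL settings are illustrative assumptions.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller-magnitude slope when signs agree."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advect(u, a, dx, dt, steps):
    """Advance u_t + a u_x = 0 (a > 0) with a MUSCL/minmod TVD scheme and
    periodic boundaries: a toy stand-in for the limited convective fluxes."""
    nu = a * dt / dx  # CFL number, must be <= 1 for stability
    for _ in range(steps):
        slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
        u_face = u + 0.5 * (1.0 - nu) * slope  # reconstructed value at the right face
        flux = a * u_face                      # flux through face i+1/2
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square pulse with sharp fronts
u = tvd_advect(u0.copy(), a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), steps=250)
print("min/max after transport:", u.min(), u.max())  # TVD: no new over/undershoots
```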

Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite sized ions

Procedia PDF Downloads 72
11 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
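
As a toy illustration of the representation and sampling steps, the sketch below uniformly subsamples synthetic transaction records and builds a small graph with networkx. The synthetic records, the 10% sampling rate, and the consecutive-block edge rule are assumptions for illustration, not the paper's exact design.

```python
import random
import networkx as nx

# Hypothetical transaction records: (tx_hash, block_number)
txs = [(f"0x{i:064x}", 17_000_000 + i // 150) for i in range(10_000)]

# Probabilistic sampling: keep a uniform random subset of transactions
# so downstream analysis stays tractable on commodity hardware.
random.seed(42)
sample = [t for t in txs if random.random() < 0.10]  # ~10% sampling rate (assumed)

# Graph construction: one node per sampled transaction; edges link
# transactions in consecutive blocks to encode temporal adjacency.
G = nx.DiGraph()
by_block = {}
for tx_hash, block in sample:
    G.add_node(tx_hash, block=block)
    by_block.setdefault(block, []).append(tx_hash)
for block, nodes in by_block.items():
    for u in nodes:
        for v in by_block.get(block + 1, []):
            G.add_edge(u, v)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges in the sampled graph")
```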

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 78
10 Governance Challenges for the Management of Water Resources in Agriculture: The Italian Way

Authors: Silvia Baralla, Raffaella Zucaro, Romina Lorenzetti

Abstract:

Water management needs to cope with economic, societal, and environmental changes. This can be achieved by 'shifting from government to governance'. In recent decades, this shift has been applied in Europe through important legislative pillars (the Water Framework Directive and the Common Agricultural Policy) and their measures focused on resilience and adaptation to climate change, with particular attention to the creation of synergies among policies and all the actors involved at different levels. In the context of climate change, the agricultural sector can play, through sustainable water management, a leading role in climate-resilient growth and environmental integrity. A recent analysis of water management governance in different countries identified common gaps concerning administration, policy, information, capacity building, funding, objectives, and accountability. A country's ability to fill these gaps is an essential requirement for making some of the changes requested by Europe, in particular improving agro-ecosystem resilience to the effects of climate change, supporting green and digital transitions, and ensuring sustainable water use. This research aims to contribute by sharing examples of water governance, and the related advantages, useful for filling the highlighted gaps. Italy, as one of the European countries most threatened by climate change and its extreme events (droughts, floods), has developed a strong and comprehensive model of water governance that enables strategic and synergic actions. In particular, the Italian water governance model was able to overcome several gaps, specifically concerning water use in agriculture, by adopting strategies such as a systemic/integrated approach, stakeholder engagement, capacity building, improved planning and monitoring ability, and an adaptive/resilient strategy for funding activities. These were carried out by putting in place regulatory, structural, and management actions. Regulatory actions include both the institution of technical committees grouping together water decision-makers and the elaboration of operative manuals and guidelines through a participative and cross-cutting approach. Structural actions deal with the funding of interventions within European and national funds according to the principles of coherence and complementarity. Finally, management actions concern the introduction of operational tools to support decision-makers and improve planning and monitoring ability. In particular, two cross-functional and interoperable web databases were introduced: SIGRIAN (National Information System for Water Resources Management in Agriculture) and DANIA (National Database of Investments for Irrigation and the Environment). Their interconnection makes it possible to support sustainable investments, taking into account compliance with the irrigation volumes quantified in SIGRIAN, ensuring a high level of attention to water saving, and monitoring the efficiency of funding. The main positive results of the Italian water governance model are synergic and coordinated work among institutions at the national, regional, and local levels; transparency on water use in agriculture; a deeper understanding by stakeholders of the importance of their roles and of their own potential benefits; and the capacity to guarantee the continuity of this model through a sensitisation process and the combined use of operational management tools.

Keywords: agricultural sustainability, governance model, water management, water policies

Procedia PDF Downloads 117
9 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography

Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai

Abstract:

Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model that can segment CT images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study dataset includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it produced an average Dice Similarity Coefficient (DSC) of only 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores either above 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation dropped the DSC to 0.15, and reducing the number of epochs reduced it below 0.1. Candidate datasets for transfer learning, which involves balancing dataset size against similarity to the target data, have been identified: the Pancreas data from the Medical Segmentation Decathlon, the Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Challenges in producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including generative adversarial networks and diffusion models, to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into radiologists' workflows, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
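
For reference, the Dice Similarity Coefficient used to score the segmentations can be computed directly from binary masks, as in this minimal sketch (the toy 2D masks stand in for CT slice segmentations):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|P intersect T| / (|P| + |T|). Returns values in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Illustrative masks standing in for a ground-truth and a predicted lesion
truth = np.zeros((64, 64), dtype=bool)
truth[20:36, 20:36] = True             # "ground-truth" lesion region
pred = np.zeros_like(truth)
pred[24:40, 24:40] = True              # partially overlapping "prediction"
print(f"DSC = {dice_coefficient(pred, truth):.2f}")  # prints DSC = 0.56
```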

Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics

Procedia PDF Downloads 97
8 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it difficult to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. The majority of small structural parts are metal because CFRP fabrication costs are high in this size class. The fact that the CFRP manufacturing processes producing the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally costlier than comparably performing metal parts, which are easier to produce. Fortunately, industry is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials plus an ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites, yielding end-use parts with high aesthetics, unmatched complexity, mass-customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software for determining the optimal fibre layup and checking predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes the preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS (although nearly any high-quality commercial thermoplastic tapes and filaments can be used), are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, parts of higher quality, with very low void content and excellent surface finish on both A and B sides, can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be held cost-competitively in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 96
7 Experimental Study on Granulated Steel Slag as an Alternative to River Sand

Authors: K. Raghu, M. N. Vathhsala, Naveen Aradya, Sharth

Abstract:

River sand is the most preferred fine aggregate for mortar and concrete. River sand is a product of natural weathering of rocks over millions of years and is mined from river beds. Sand mining has disastrous environmental consequences, and the excessive mining of river beds is creating an ecological imbalance. This has led to restrictions on sand mining imposed by the ministry of environment. Driven by the acute need for sand, stone dust or manufactured sand, prepared by crushing and screening coarse aggregate, has been used as sand in the recent past. However, manufactured sand is also a natural material and has its own quarrying and quality issues. To reduce the burden on the environment, alternative materials for use as fine aggregates are being extensively investigated all over the world. Considering the quantity required, quality, and properties, there has been a global consensus on one material: granulated slag. Granulated slag has been proven suitable for replacing natural sand/crushed fine aggregates. In developed countries, the use of granulated slag as fine aggregate to replace natural sand is well established and in regular practice. In the present paper, granulated slag is investigated for use in mortar. Slags are the main by-products generated during iron and steel production. Over the past decades, steel production has increased and, consequently, higher volumes of by-products and residues have been generated, which has driven the reuse of these materials in increasingly efficient ways. In recent years, new technologies have been developed to improve the recovery rates of slags. Increased slag recovery and use in different fields of application, such as cement making, construction, and fertilizers, helps preserve natural resources. In addition to environmental protection, these practices produce economic benefits by providing sustainable solutions that can allow the steel industry to achieve its ambitious target of "zero waste" in the coming years. Slags are generated at two different stages of steel production, iron making and steel making, known as BF (blast furnace) slag and steel slag, respectively. Slagging agents or fluxes, such as limestone, dolomite, and quartzite, are added to BF or steelmaking furnaces to remove impurities from the ore, scrap, and other ferrous charges during smelting. Slag formation is the result of a complex series of physical and chemical reactions between the non-metallic charge (limestone, dolomite, fluxes), the energy sources (coal, coke, oxygen, etc.), and refractory materials. Because of the high temperatures (about 1500 °C) during their generation, slags do not contain any organic substances. Because slags are lighter than the liquid metal, they float and are easily removed. The slag protects the metal bath from the atmosphere and helps maintain its temperature by forming a liquid cover. These slags leave the furnace in a liquid state and are either solidified in air after dumping in a pit or granulated by impinging-water systems. Generally, BF slags are granulated and used in cement making due to their high cementitious properties, while steel slags are mostly dumped due to unfavourable physico-chemical conditions. The growing stockpiles of dumped steel slag not only occupy large areas of land but also waste resources and can potentially affect the environment through water pollution. Since BF slag contains little Fe, it can be used directly, and it has found wide application outside the iron and steelmaking process, such as in cement production, road construction, civil engineering work, fertilizer production, landfill daily cover, and soil reclamation.

Keywords: steel slag, river sand, granulated slag, environmental

Procedia PDF Downloads 245
6 Successful Optimization of a Shallow Marginal Offshore Field and Its Applications

Authors: Kumar Satyam Das, Murali Raghunathan

Abstract:

This note discusses the development feasibility of a challenging shallow offshore field in South East Asia and how its lessons can be applied to marginal field development across the world, especially in a low oil price environment. The field was found to be economically challenging even during periods of high oil prices, and the project was put on hold. Shell started a development study with the aim of significantly reducing cost through competitive scoping and reviving stranded projects. The proposed strategy involved improving per-platform recovery and reducing CAPEX. Methodology: Based on benchmarking tools such as Woodmac for similar projects in the region, and on economic affordability, a challenging target of a 50% reduction in unit development cost (UDC) was set for the project. The technical scope was reduced to the minimum: a wellhead platform with only the functionality needed to ensure production. Key project decisions, such as well location and count, well design, artificial lift method, and wellhead platform type, were evaluated under different development concepts through an integrated multi-discipline approach. The key elements influencing per-platform recovery were wellhead platform (WHP) location, well count, well reach, and well productivity. Major Findings: The shallow reservoir posed challenges in well design (dog-leg severity, casing size, and achievable step-out) and in the choice of artificial lift and sand-control methods. An integrated approach among the relevant disciplines, with a challenging mindset, enabled an optimized set of development decisions. This led to a significant improvement in per-platform recovery. It was concluded that platform recovery largely depended on the reach of the wells. The choice of a slim well design enabled high-inclination, higher-productivity wells. However, there is a trade-off between high-inclination gas lift (GL) wells and low-inclination wells in terms of long-term value, operational complexity, well reach, recovery, and uptime. Well design elements such as casing size, well completion, artificial lift, and sand control were added successively to the minimum technical scope, leading to a value-and-risk staircase. Logical combinations of options (slim well, GL) were competitively screened to achieve a 25% reduction in well cost. Facility cost reduction was achieved by sourcing a standardized low-cost facilities platform in combination with portfolio execution to maximize execution efficiency; this approach is expected to reduce facilities cost by ~23% relative to the development costs. Further cost reductions were achieved by maximizing the use of existing facilities nearby, changing reliance on existing water injection wells, and utilizing the existing water injector (W.I.) platform for new injectors. Conclusion: The study provides a spectrum of technically feasible options. It also made clear that different drivers lead to different development concepts, and the cost-value trade-off staircase made this very visible. Scoping the project competitively has proven valuable for decision-makers by creating a transparent view of value and of the associated risks, uncertainties, and trade-offs for difficult choices: some elements of a project can be competitive while other parts struggle, even though they contribute significant volumes. Reducing UDC through proper scoping and benchmarking of current projects serves as a lesson for the development of marginal fields across the world, especially in a low oil price scenario. On average, this way of developing a field has reduced costs by 40% for the Shell projects concerned.

Keywords: benchmarking, full field development, CAPEX, feasibility

Procedia PDF Downloads 159
5 Structural Characteristics of HPDSP Concrete on Beam Column Joints

Authors: Hari Krishan Sharma, Sanjay Kumar Sharma, Sushil Kumar Swar

Abstract:

Inadequate transverse reinforcement is considered the main reason for the beam-column joint shear failures observed during recent earthquakes. The DSP matrix consists of cement and a high content of micro-silica with a low water-to-cement ratio, while the aggregates are graded quartz sand. The use of reinforcing fibres leads not only to increased tensile/bending strength and specific fracture energy but also to reduced brittleness and, consequently, to non-explosive rupture behaviour. Besides, fibre-reinforced materials are more homogeneous and less sensitive to small defects and flaws. Recent work on the freeze-thaw durability (also in the presence of de-icing salts) of fibre-reinforced DSP confirms its excellent behaviour over the expected long-term service life. DSP materials, including fibre-reinforced DSP and CRC (Compact Reinforced Composites), are obtained by using high quantities of superplasticizers and high volumes of micro-silica. Steel fibres of high tensile yield strength, small diameter, and short length are utilized in different volume percentages and aspect ratios to improve performance by reducing the brittleness of the matrix material. In the case of High Performance Densified Small Particle Concrete (HPDSPC), the concrete is dense at the microstructural level, and its tensile strain capacity is much higher than that of conventional SFRC, SIFCON, and SIMCON. Moment-resisting beam-column sub-assemblages were constructed using HPDSPC in the joint region, with varying steel fibre quantities, fibre aspect ratios, and fibre orientations in the critical section. These sub-assemblages were tested under cyclic/earthquake loading. Besides loading measurements, frame displacements, diagonal joint strain, and rebar strain adjacent to the joint were also measured to investigate the stress-strain behaviour, load-deformation characteristics, joint shear strength, failure mechanism, ductility-related parameters, and stiffness and energy dissipation parameters of the beam-column sub-assemblages. Finally, a design procedure for the optimum design of HPDSPC reinforced concrete beam-column joint sub-assemblages, corresponding to the acting moments, shear forces, and axial forces, is proposed. It is well recognized in structural design and research that implementing a material brittleness measure in the design of RC structures can improve structural reliability by providing uniform safety margins over a wide range of structural sizes and material compositions. This has led to the development of high-performance concrete optimized for a combination of structural properties. Because of its extremely high strength, structural application of HPDSPC will significantly reduce dead load compared with normal-weight concrete, offering substantial cost savings by providing improved seismic response, longer spans, thinner sections, less reinforcing steel, and lower foundation costs. These cost-effective characteristics will make this material more versatile for use in various structural applications, such as beam-column joints in industrial buildings, airports, parking areas, docks, and harbours, as well as containers for hazardous material, safety boxes, and moulds and tools for polymer composites and metals.

Keywords: high performance densified small particle concrete (HPDSPC), steel fibre reinforced concrete (SFRC), slurry infiltrated concrete (SIFCON), Slurry infiltrated mat concrete (SIMCON)

Procedia PDF Downloads 303
4 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas and (ii) immunisation in rural contexts with relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention. The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, or mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools addressing five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic in that data volumes, instructions, and activities could overwhelm managers, and tools are not always applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can now be tested and developed further.

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 141
3 SEAWIZARD-Multiplex AI-Enabled Graphene Based Lab-On-Chip Sensing Platform for Heavy Metal Ions Monitoring on Marine Water

Authors: M. Moreno, M. Alique, D. Otero, C. Delgado, P. Lacharmoise, L. Gracia, L. Pires, A. Moya

Abstract:

Marine environments are increasingly threatened by heavy metal contamination, including mercury (Hg), lead (Pb), and cadmium (Cd), posing significant risks to ecosystems and human health. Traditional monitoring techniques often fail to provide the spatial and temporal resolution needed for real-time detection of these contaminants, especially in remote or harsh environments. SEAWIZARD addresses these challenges by leveraging the flexibility, adaptability, and cost-effectiveness of printed electronics, integrated with microfluidics, to develop a compact, portable, and reusable sensor platform designed specifically for real-time monitoring of heavy metal ions in seawater. The SEAWIZARD sensor is a multiparametric Lab-on-Chip (LoC) device, a miniaturized system that integrates several laboratory functions into a single chip, drastically reducing sample volumes and improving adaptability. The platform integrates three printed graphene electrodes for the simultaneous detection of Hg, Cd, and Pb via square wave voltammetry; these working electrodes share common reference and counter electrodes to improve space efficiency. Additionally, it integrates printed pH and temperature sensors to correct for environmental interferences that may impact the accuracy of metal detection. The pH sensor is based on a carbon electrode with electrodeposited iridium oxide, while the temperature sensor is graphene-based. A protective dielectric layer is printed on top of the sensor to safeguard it in harsh marine conditions. The use of flexible polyethylene terephthalate (PET) as the substrate enables the sensor to conform to various surfaces and operate in challenging environments. One of the key innovations of SEAWIZARD is its integrated microfluidic layer, fabricated from cyclic olefin copolymer (COC). This microfluidic component provides a controlled flow of seawater over the sensing area, enabling significantly improved detection limits compared with direct water sampling. The system's dual-channel design separates the detection of heavy metals from the measurement of pH and temperature, ensuring that each parameter is measured under optimal conditions. In addition, the temperature sensor is finely tuned with a serpentine-shaped microfluidic channel to ensure precise thermal measurements. SEAWIZARD also incorporates custom electronics that allow for wireless data transmission via Bluetooth, facilitating rapid data collection and user interface integration. Embedded artificial intelligence further enhances the platform by providing an automated alarm system capable of detecting predefined metal concentration thresholds and issuing warnings when limits are exceeded. This predictive feature enables early warnings of potential environmental disasters, such as industrial spills or toxic levels of heavy metal pollutants, making SEAWIZARD not just a detection tool but a comprehensive monitoring and early intervention system. In conclusion, SEAWIZARD represents a significant advancement in printed electronics applied to environmental sensing. By combining flexible, low-cost materials with advanced microfluidics, custom electronics, and AI-driven intelligence, SEAWIZARD offers a highly adaptable and scalable solution for real-time, high-resolution monitoring of heavy metals in marine environments. Its compact and portable design makes it an accessible, user-friendly tool with the potential to transform water quality monitoring practices and provide critical data to protect marine ecosystems from contamination-related risks.
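
As a schematic of the final quantification and alarm step, the sketch below converts a square-wave voltammetry peak current into a concentration via a linear calibration curve and checks it against an alert threshold. The calibration constants and the threshold are invented for illustration and are not SEAWIZARD's actual calibration or limits.

```python
# Illustrative post-processing for one metal channel: peak current -> concentration -> alarm.
# The calibration slope/intercept and threshold below are hypothetical values.

CAL_SLOPE_UA_PER_PPB = 0.042   # peak current (uA) per ppb of Pb (assumed)
CAL_INTERCEPT_UA = 0.10        # background current (uA, assumed)
PB_ALARM_PPB = 10.0            # assumed alert threshold for Pb

def concentration_ppb(peak_current_ua: float) -> float:
    """Invert the linear calibration i_peak = slope * c + intercept."""
    return max((peak_current_ua - CAL_INTERCEPT_UA) / CAL_SLOPE_UA_PER_PPB, 0.0)

def check_alarm(peak_current_ua: float) -> str:
    c = concentration_ppb(peak_current_ua)
    status = "ALARM" if c >= PB_ALARM_PPB else "ok"
    return f"Pb estimate: {c:5.1f} ppb -> {status}"

for i_peak in (0.2, 0.45, 0.9):   # example peak currents from three scans
    print(check_alarm(i_peak))
```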

Keywords: lab-on-chip, printed electronics, real-time monitoring, microfluidics, heavy metal contamination

Procedia PDF Downloads 34
2 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In the volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN component learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to capture the complex network of influences governing market movements. Complementing this, our LSTM component is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation of the GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared with conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing predictive performance on directional market movements, the model achieved an accuracy rate of 78%, significantly outperforming the benchmark models' average accuracy of 65%. This level of accuracy is instrumental for strategies that depend on predicting the direction of price movements. This study demonstrates the efficacy of combining graph-based and sequential deep learning for financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to advance investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
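
To make the fusion concrete, here is a minimal PyTorch sketch of a GCN-LSTM architecture: a single graph convolution embeds each asset's daily features using a normalized co-movement adjacency matrix, and an LSTM models the evolution of those embeddings over the lookback window. The layer sizes, graph construction, and toy data are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One graph convolution: relu(A_hat X W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):           # x: (num_assets, in_dim)
        return torch.relu(self.linear(a_hat @ x))

class GCNLSTM(nn.Module):
    """GCN extracts cross-asset structure per day; an LSTM models its evolution."""
    def __init__(self, n_features: int, gcn_dim: int = 32, lstm_dim: int = 64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)  # next-day return estimate per asset

    def forward(self, x_seq, a_hat):       # x_seq: (seq_len, num_assets, n_features)
        emb = torch.stack([self.gcn(x, a_hat) for x in x_seq])  # (seq_len, num_assets, gcn_dim)
        out, _ = self.lstm(emb.permute(1, 0, 2))                # each asset as one sequence
        return self.head(out[:, -1, :]).squeeze(-1)             # (num_assets,)

# Toy usage: 8 assets, 30 trading days, 5 features (e.g., OHLC returns + volume)
num_assets, seq_len, n_features = 8, 30, 5
adj = (torch.rand(num_assets, num_assets) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()        # symmetric "co-movement" graph (random here)
x_seq = torch.randn(seq_len, num_assets, n_features)
model = GCNLSTM(n_features)
pred = model(x_seq, normalize_adjacency(adj))
print(pred.shape)  # torch.Size([8]): one forecast per asset
```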

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 68
1 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy

Authors: Anisa Suraya Ab Razak, Izza Hayat

Abstract:

Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic overshadowing and the attribution of abnormal behavior to intellectual disability or psychiatric conditions. Hyponatremia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic and moderate forms to severe levels with life-threatening complications such as seizures, coma, and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only at autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year-old Caucasian male with known epilepsy and learning disability was admitted from residential living with self-terminating generalized tonic-clonic seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior: restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and levetiracetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours with minimal recovery of consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. Laboratory monitoring identified dangerously rapid correction of serum sodium concentration, and the hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His GCS recovered to baseline after 48 hours, with improvement in behavior: he engaged with healthcare professionals, understood the importance of taking medications, and admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward-level care and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after the resolution of the acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily, ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to the challenges physicians face in making an early diagnosis and treating acute hyponatremia safely. A high index of suspicion for water intoxication is required in this population, including in patients with known epilepsy. Monitoring urine output proved clinically effective in aiding diagnosis. Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing the risk of fatal complications, e.g., central pontine myelinolysis.

Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia

Procedia PDF Downloads 123