Search results for: root uptake models

21 Geospatial and Statistical Evidence of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria

Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde

Abstract:

An investigation was carried out on underground water system dynamics within the Ilorin metropolis to monitor subsurface flow and its corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many of its cities, constitutes a threat to underground water bodies. Ilorin, a transboundary city in southwestern Nigeria, is a ready microcosm of Africa's unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, groundwater samples have nevertheless been found to contain relatively high levels of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate subsurface dynamics and the corresponding pollution indicators. Forty-four sampling points were selected around known groundwater pollution sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four statistical analyses, namely: principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis. A regression model was also developed, aimed at finding functional relationships that adequately describe the behaviour of water quality in terms of the hypothetical landfill-related factors that may influence it, namely: distance of the water source from dumpsites, static water level of the groundwater, subsurface permeability (inferred from hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. The pollution pattern in the study area generally assumed a bimodal distribution, with the major concentrations of chemical pollutants in the underground watershed and the recharge zone. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system of the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at short distances. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area.
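
As an illustration of the statistical workflow described in this abstract, the following Python sketch sets up a multiple linear regression of a water-quality indicator on the four landfill-related factors named above, together with a principal component analysis of the quality parameters. All data, column names, and values are synthetic placeholders, not the study's measurements.

```python
# Illustrative sketch (not the authors' code): regression of a
# water-quality indicator on the four landfill-related factors,
# plus PCA over the measured quality parameters.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
samples = pd.DataFrame({                       # 44 sampling points
    "distance_to_dumpsite_m": rng.uniform(50, 2000, 44),
    "static_water_level_m":   rng.uniform(2, 30, 44),
    "hydraulic_gradient":     rng.uniform(0.001, 0.05, 44),
    "soil_infiltration_mm_h": rng.uniform(5, 60, 44),
})
chloride = rng.uniform(10, 400, 44)            # hypothetical response, mg/L

reg = LinearRegression().fit(samples, chloride)
print("R^2:", reg.score(samples, chloride))
print(dict(zip(samples.columns, reg.coef_)))

# PCA on standardised quality parameters to find the chief indicators
quality = pd.DataFrame(rng.random((44, 6)),
                       columns=["conductivity", "suspended_solids",
                                "ca_hardness", "tds", "total_coliforms",
                                "coliforms"])
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(quality))
print("explained variance ratios:", pca.explained_variance_ratio_)
```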

Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component

Procedia PDF Downloads 117
20 Cardiolipin-Incorporated Liposomes Carrying Curcumin and Nerve Growth Factor to Rescue Neurons from Apoptosis for Alzheimer’s Disease Treatment

Authors: Yung-Chih Kuo, Che-Yu Lin, Jay-Shake Li, Yung-I Lou

Abstract:

Curcumin (CRM) and nerve growth factor (NGF) were entrapped in liposomes (LIP) with cardiolipin (CL) to downregulate the phosphorylation of mitogen-activated protein kinases for Alzheimer's disease (AD) management. AD is a neurodegenerative disorder involving a gradual loss of memory that results in irreversible dementia. CL-conjugated LIP loaded with CRM (CRM-CL/LIP) and with NGF (NGF-CL/LIP) were applied to AD models of SK-N-MC cells and Wistar rats insulted with β-amyloid peptide (Aβ). Lipids comprising 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (Avanti Polar Lipids, Alabaster, AL), 1',3'-bis[1,2-dimyristoyl-sn-glycero-3-phospho]-sn-glycerol (CL; Avanti Polar Lipids), 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-2000] (Avanti Polar Lipids), 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[carboxy(polyethylene glycol)-2000] (Avanti Polar Lipids), and CRM (Sigma–Aldrich, St. Louis, MO) were dissolved in chloroform (J. T. Baker, Phillipsburg, NJ) and condensed using a rotary evaporator (Panchum, Kaohsiung, Taiwan). Human β-NGF (Alomone Lab, Jerusalem, Israel) was added in the aqueous phase. Wheat germ agglutinin (WGA; Medicago AB, Uppsala, Sweden) was grafted onto LIP loaded with CRM (WGA-CRM-LIP) and onto CL-conjugated LIP loaded with CRM (WGA-CRM-CL/LIP) using 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (Sigma–Aldrich) and N-hydroxysuccinimide (Alfa Aesar, Ward Hill, MA). Protein samples of SK-N-MC cells (American Type Tissue Collection, Rockville, MD) were used for sodium dodecyl sulfate (Sigma–Aldrich) polyacrylamide gel (Sigma–Aldrich) electrophoresis. In the animal study, the LIP formulations were administered by intravenous injection via a tail vein of male Wistar rats (250–280 g, 8 weeks, BioLasco, Taipei, Taiwan), which were housed in the Animal Laboratory of National Chung Cheng University in accordance with the institutional guidelines and the guidelines of the Animal Protection Committee under the Council of Agriculture of the Republic of China. We found that CRM-CL/LIP could inhibit the expression of phosphorylated p38 (p-p38), p-Jun N-terminal kinase (p-JNK), and p-tau protein at serine 202 (p-Ser202), retarding neuronal apoptosis. Free CRM, and CRM released from CRM-LIP and CRM-CL/LIP, could not straightforwardly and effectively inhibit the expression of p-p38 and p-JNK in the cytoplasm. In addition, NGF-CL/LIP enhanced the quantities of p-neurotrophic tyrosine kinase receptor type 1 (p-TrkA) and p-extracellular-signal-regulated kinase 5 (p-ERK5), preventing the Aβ-induced degeneration of neurons. The membrane fusion of NGF-LIP activated the ERK5 pathway, and the targeting capacity of NGF-CL/LIP enhanced the possibility of released NGF affecting the TrkA level. Moreover, WGA-CRM-LIP improved the permeation of CRM across the blood–brain barrier (BBB), significantly reduced the Aβ plaque deposition and malondialdehyde level, and increased the percentage of normal neurons and cholinergic function in the hippocampus of AD rats. This was mainly because the encapsulated CRM was protected by LIP against rapid degradation in the blood. Furthermore, WGA on LIP could target N-acetylglucosamine on the endothelia and increase the quantity of CRM transported across the BBB. In addition, WGA-CRM-CL/LIP was effective in suppressing the synthesis of acetylcholinesterase and reducing the decomposition of acetylcholine for better neurotransmission. Based on the in vitro and in vivo evidence, WGA-CRM-CL/LIP can rescue neurons from apoptosis in the brain and is a promising drug delivery system for clinical AD therapy.

Keywords: Alzheimer’s disease, β-amyloid, liposome, mitogen-activated protein kinase

Procedia PDF Downloads 330
19 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence

Authors: Muhammad Bilal Shaikh

Abstract:

Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
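
To make the fusion idea concrete, here is a minimal late-fusion sketch in Python: numeric sensor features, an image-embedding vector, and TF-IDF features from operator logs are concatenated into one throughput regressor. This is not the paper's implementation; every input here is synthetic, and the model choice is an assumption.

```python
# A minimal late-fusion sketch (not the paper's system): sensor features,
# a vector summarising camera frames, and TF-IDF features from operator
# logs are concatenated to forecast plant throughput. All data synthetic.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor

n = 200
sensor = np.random.rand(n, 8)               # e.g. feed rate, mill pressure
vision = np.random.rand(n, 16)              # e.g. CNN embedding of frames
logs = ["mill liner wear observed"] * (n // 2) + \
       ["normal operation"] * (n - n // 2)  # operator reports (toy text)
throughput = np.random.rand(n) * 100        # tonnes per hour (synthetic)

text = TfidfVectorizer().fit_transform(logs)
X = hstack([csr_matrix(sensor), csr_matrix(vision), text]).toarray()

model = GradientBoostingRegressor().fit(X, throughput)
print("in-sample R^2:", model.score(X, throughput))
```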

Keywords: multimodal AI, computer vision, NLP, mineral processing, mining

Procedia PDF Downloads 68
18 Social Enterprises over Microfinance Institutions: The Challenges of Governance and Management

Authors: Dean Sinković, Tea Golja, Morena Paulišić

Abstract:

Upon the end of the vicious war in former Yugoslavia in 1995, the international development community widely promoted microfinance as the key development framework to eradicate poverty, create jobs, and increase income. Widespread claims were made that microfinance institutions would play a vital role in creating the bedrock for a sustainable 'bottom-up' economic development trajectory, thus helping the newly formed states find a proper way out of post-war economic depression. This uplifting neoliberal narrative has no empirical support in the Republic of Croatia. Firstly, the enterprises created via the microfinance sector are small, unskilled, labour-intensive, without technology, and carry a huge debt burden. This results in extremely high failure rates of microenterprises, with poor individuals plunging into even deeper poverty, acute indebtedness, and social marginalization. Secondly, evidence shows that microcredit is an exact reflection of the dangerous and destructive sub-prime lending model, with 'boom-to-bust' scenarios in which benefits are extracted solely by the tiny financial and political elite working around the microfinance sector. We argue that microcredit providers are not the proper financial structures through which developing countries should seek a way out of underdevelopment and poverty. In order to achieve sustainable long-term growth goals, public policy needs to focus on creating, supporting, and facilitating the development of small and mid-size enterprises. These enterprises should be technically sophisticated, capable of creating new capabilities and innovations, equipped with managerial expertise (skills formation), and inter-connected with other organizations (i.e., clusters, networks, supply chains, etc.). Evidence from South-East Europe suggests that such structures are not created via the microfinance model but can be fostered through various forms of social enterprises. Various legal entities may operate as social enterprises: limited liability private companies, limited liability public companies, cooperatives, associations, foundations, institutions, mutual insurances, and credit unions. Our main hypothesis is that cooperatives are potential agents of social and economic transformation and community development in the region. Financial cooperatives are structures that can foster a more efficient allocation of financial resources involving deeper democratic arrangements and more socially just outcomes. In Croatia, the pioneers of the first social enterprises were civil society organizations that formed separate legal entities (i.e., cooperatives, associations, and commercial companies working on the principle of returning the investment to the founder). Ever since 1995, cooperatives in Croatia have grown not by pursuing their own internal development but mostly by relying on external financial support. The greater part of today's registered cooperatives are agricultural (39%), followed by war veterans' cooperatives (38%) and others. There are no financial cooperatives in Croatia. In view of the above, we examine the historical developments and the prevailing social enterprise forms and discuss their advantages and disadvantages as potential agents for social and economic transformation and community development in the region. There is an evident lack of understanding of this business model and of its potential for social and economic development, compounded by an unfavorable institutional environment. Thus, we discuss the role of governance and management in the formation of social enterprises in Croatia, stressing the challenges for the governance of the country's social enterprise movement.

Keywords: financial cooperatives, governance and management models, microfinance institutions, social enterprises

Procedia PDF Downloads 275
17 Anti-Infective Potential of Selected Philippine Medicinal Plant Extracts against Multidrug-Resistant Bacteria

Authors: Demetrio L. Valle Jr., Juliana Janet M. Puzon, Windell L. Rivera

Abstract:

From the various medicinal plants available in the Philippines, crude ethanol extracts of twelve (12) Philippine medicinal plants, namely: Senna alata L. Roxb. (akapulko), Psidium guajava L. (bayabas), Piper betle L. (ikmo), Vitex negundo L. (lagundi), Mitrephora lanotan (Blanco) Merr. (lanotan), Zingiber officinale Roscoe (luya), Curcuma longa L. (luyang dilaw), Tinospora rumphii Boerl (makabuhay), Moringa oleifera Lam. (malunggay), Phyllanthus niruri L. (sampa-sampalukan), Centella asiatica (L.) Urban (takip kuhol), and Carmona retusa (Vahl) Masam (tsaang gubat), were studied. In vitro methods of evaluation against selected Gram-positive and Gram-negative multidrug-resistant (MDR) bacteria were performed on the plant extracts. Although five of the plants showed varying antagonistic activities against the test organisms, only Piper betle L. exhibited significant activities against both Gram-negative and Gram-positive multidrug-resistant bacteria, exhibiting wide zones of growth inhibition in the disk diffusion assay and requiring the lowest concentrations of the extract to inhibit bacterial growth, as supported by the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) assays. Further antibacterial studies of the Piper betle L. leaf, extracted by three methods (ethanol, methanol, supercritical CO2), revealed similar inhibitory activities against a multitude of Gram-positive and Gram-negative MDR bacteria. Thin layer chromatography (TLC) assay of the leaf extract revealed a maximum of eight compounds with Rf values of 0.92, 0.86, 0.76, 0.53, 0.40, 0.25, 0.13, and 0.013, best visualized under UV-366 nm. TLC-agar overlay bioautography of the isolated compounds showed that the compounds with Rf values of 0.86 and 0.13 have inhibitory activities against Gram-positive MDR bacteria (MRSA and VRE). The compound with an Rf value of 0.86 also possesses inhibitory activity against Gram-negative MDR bacteria (CRE Klebsiella pneumoniae and MBL Acinetobacter baumannii). Gas chromatography-mass spectrometry (GC-MS) identified six volatile compounds, four of which are new compounds that have not been mentioned in the medical literature. The chemical compounds isolated include 4-(2-propenyl)phenol and eugenol; the four new compounds were ethyl diazoacetate, tris(trifluoromethyl)phosphine, heptafluorobutyrate, and 3-fluoro-2-propynenitrite. Phytochemical screening and investigation of its antioxidant, cytotoxic, possible hemolytic activities, and mechanisms of antibacterial activity were also done. The results showed that the local variant of Piper betle leaf extract possesses significant antioxidant, anti-cancer, and antimicrobial properties, attributed to the presence of bioactive compounds, particularly flavonoids (condensed tannin, leucoanthocyanin, gamma benzopyrone), anthraquinones, steroids/triterpenes, and 2-deoxysugars. Piper betle L. is also traditionally known to enhance wound healing, which could be primarily due to its antioxidant, anti-inflammatory, and antimicrobial activities. In vivo studies on mice using 2.5% and 5% ethanol leaf extract cream formulations in excised wound models significantly accelerated wound healing in the mouse subjects, with results and values on par with a current antibacterial cream (mupirocin). From the results of this series of studies, we have clearly demonstrated the value of Piper betle L. as a source of bioactive compounds that could be developed into therapeutic agents against MDR bacteria.

Keywords: Philippine herbal medicine, multidrug-resistant bacteria, Piper betle, TLC-bioautography

Procedia PDF Downloads 768
16 EcoTeka, Open-Source Software for Urban Ecosystem Restoration through Technology

Authors: Manon Frédout, Laëtitia Bucari, Mathias Aloui, Gaëtan Duhamel, Olivier Rovellotti, Javier Blanco

Abstract:

Ecosystems must be resilient to ensure cleaner air, better water and soil quality, and thus healthier citizens. Technology can be an excellent tool to support urban ecosystem restoration projects, especially when based on open source and promoting open data. This is the goal of the ecoTeka application: a single digital tool for tree management which allows decision-makers to improve their urban forestry practices, enabling more responsible urban planning and climate change adaptation. EcoTeka provides city councils with three main functionalities tackling three of their challenges: easier biodiversity inventories, better green space management, and more efficient planning. To answer the cities' need for reliable tree inventories, the application was first built with open data coming from the websites OpenStreetMap and OpenTrees, but it will also soon include the possibility of creating new data. To achieve this, a multi-source algorithm will be developed, based on the existing artificial intelligence model DeepForest, integrating open-source satellite images, 3D representations from LiDAR, and street views from Mapillary. This data processing will make it possible to identify each tree's position, height, crown diameter, and taxonomic genus. To support urban forestry management, ecoTeka offers a dashboard for monitoring the city's tree inventory and triggering alerts about upcoming due interventions. This tool was co-constructed with the green space departments of the French cities of Alès, Marseille, and Rouen. The third functionality of the application is a decision-making tool for urban planning, promoting biodiversity and landscape connectivity metrics to drive an ecosystem restoration roadmap. Based on landscape graph theory, we are currently experimenting with new methodological approaches to scale down regional ecological connectivity principles to local biodiversity conservation and urban planning policies. This methodological framework will couple a graph-theoretic approach with biological data, mainly biodiversity occurrence (presence/absence) data available on international (e.g., GBIF), national (e.g., Système d'Information Nature et Paysage), and local (e.g., Atlas de la Biodiversité Communale) biodiversity data-sharing platforms, in order to help inform new decisions for the conservation and restoration of ecological networks in urban areas. An experiment on this subject is currently ongoing with Montpellier Méditerranée Métropole. These projects and studies have shown that only 26% of tree inventory data is currently geo-localized in France; the rest is still kept on paper or in Excel sheets. It seems that technology is not yet used enough to enrich the knowledge city councils have about biodiversity in their city, and that existing biodiversity open data (e.g., occurrence, telemetry, or genetic data), species distribution models, and landscape graph connectivity metrics are still underexploited for making rational decisions in landscape and urban planning projects. This is the goal of ecoTeka: to support easier inventories of urban biodiversity and better management of urban spaces through rational planning and decisions relying on open databases. Future studies and projects will focus on the development of tools for reducing soil artificialization, selecting plant species adapted to climate change, and highlighting the need for ecosystem and biodiversity services in cities.
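
The landscape-graph idea can be sketched in a few lines of Python using networkx: green-space patches become nodes, edges connect patches within an assumed dispersal distance, and a connectivity index is computed over the graph. The patches, coordinates, dispersal distance, and normalisation below are illustrative assumptions, not ecoTeka's actual algorithm.

```python
# Simplified landscape-connectivity sketch (not ecoTeka's code):
# patches are nodes; edges link patches closer than a species'
# assumed dispersal distance; an Integral Index of Connectivity
# (IIC)-style score is computed. Coordinates and areas are invented.
import itertools
import math
import networkx as nx

patches = {                        # id: (x, y, area m^2), hypothetical
    "parc_a": (0, 0, 5000),
    "parc_b": (300, 50, 1200),
    "square_c": (650, 100, 800),
    "cimetiere_d": (2000, 0, 3000),
}
dispersal_m = 500                  # assumed maximum dispersal distance

G = nx.Graph()
G.add_nodes_from(patches)
for u, v in itertools.combinations(patches, 2):
    (x1, y1, _), (x2, y2, _) = patches[u], patches[v]
    if math.hypot(x2 - x1, y2 - y1) <= dispersal_m:
        G.add_edge(u, v)

# IIC numerator: sum over patch pairs of a_i*a_j / (1 + topological links);
# normalised here by total patch area (standard IIC uses landscape area).
total_area = sum(a for _, _, a in patches.values())
num = 0.0
for u in patches:
    for v in patches:
        try:
            nl = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            continue                 # unreachable pairs contribute nothing
        num += patches[u][2] * patches[v][2] / (1 + nl)
print("connectivity index:", num / total_area**2)
```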

Keywords: digital software, ecological design of urban landscapes, sustainable urban development, urban ecological corridor, urban forestry, urban planning

Procedia PDF Downloads 70
15 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid

Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang

Abstract:

Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat's conditions shape the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by extrinsic factors such as its physiological state, climatic variables, and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be addressed using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are one such approach and have been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a lattice of cells, each of which may be occupied by an individual, that evolves according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semi-automated approach to electronic data processing that is guided by users who can interact with the data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of organism dispersal at the landscape scale, we therefore designed Pydisp, a free visual analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows users to perform quick and interactive spatiotemporal analyses of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modeling is demonstrated in a case study predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches. Furthermore, as demonstrated in the example, the software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species' dispersal mechanisms.
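
The fuzzy cellular-automaton principle behind such a tool can be illustrated with a toy Python kernel: each cell carries a fuzzy occupancy value in [0, 1] that spreads to its neighbours, gated by a habitat-suitability layer. This is a conceptual sketch, not the Pydisp source; the grid, suitability layer, and spread rate are invented.

```python
# Toy fuzzy-CA dispersal kernel (conceptual, not the Pydisp source):
# occupancy in [0, 1] spreads to the 4-neighbourhood, weighted by a
# habitat-suitability layer (e.g. derived from climate data).
import numpy as np

rng = np.random.default_rng(0)
size = 50
suitability = rng.random((size, size))      # stand-in for climate layers
occupancy = np.zeros((size, size))
occupancy[size // 2, size // 2] = 1.0       # single release point

def step(occ, suit, spread=0.25):
    # mean occupancy of the von Neumann neighbourhood
    # (np.roll wraps at the edges, i.e. a toroidal grid)
    nbr = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0)
           + np.roll(occ, 1, 1) + np.roll(occ, -1, 1)) / 4.0
    # fuzzy additive update: inflow gated by local habitat suitability
    return np.clip(occ + spread * nbr * suit, 0.0, 1.0)

for _ in range(100):                         # e.g. 100 weekly time steps
    occupancy = step(occupancy, suitability)
print("cells with occupancy > 0.1:", int((occupancy > 0.1).sum()))
```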

Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal

Procedia PDF Downloads 77
14 Socio-Sensorial Assessment of Nursing Homes in Singapore: Towards Integrated Enabling Design

Authors: Zdravko Trivic, John Chye Fung, Ruzica Bozovic-Stamenovic

Abstract:

Within the context of a rapidly ageing population in Singapore and the pressing demands on both caregivers and care providers, an integrated approach to an ageing-friendly and ability-sensitive enabling environment becomes an imperative. This applies particularly to nursing home environments and their immediate surroundings, as they are becoming one of the main available long-term care options for many senior adults who are unable to age at home. Yet, despite considerable efforts to break with the still predominant clinical approach to eldercare and to introduce more home-like design and a person-centric care model, nursing homes keep being stigmatised and perceived as not so desirable environments to grow old in. The challenges are further emphasised by the associated physical, sensorial, psychological and cognitive declines that are common consequences of ageing. Such declines have an immense impact on almost all aspects of older adults' daily functioning, including problems with mobility and spatial orientation, difficulties in communication, withdrawal from social interaction, higher levels of depression, and a decreased sense of independence and autonomy. However, typical nursing home designs tend to neglect the full capacities of balanced and carefully integrated multisensory stimuli as an active component of care and ability building. This paper outlines part of a larger multi-disciplinary study of six nursing homes in Singapore, with the overarching objective of creating new models of supportive nursing home environments that go beyond the clinical care model and encourage community integration with nursing home settings. The paper focuses on the largely neglected aspects of sensorial comfort and the multi-sensorial properties of nursing homes, covering both indoor spaces and immediate outdoor spaces (boundaries). The objective was to investigate sensory rhythms and explore their role in nursing home users' daily routines and their therapeutic capacities. Socio-sensory rhythms were captured and analysed through a combination of on-site recordings of “objective” quantitative sensory data (air temperature and humidity, sound level, and luminance) using a multi-function environment meter, subjectively perceived experience data, spatial mapping, first-person observations of nursing home users' activity patterns, and interviews. This was done in addition to the employment of available assessment tools, such as the Wisconsin Person Directed Care assessment tool, the Dementia Quality of Life [DQoL] instrument, and the Resident Environment Impact Scale [REIS], as these tools address the issues of sensorial experience insufficiently and selectively. Key findings indicate varied levels of sensory comfort, as well as diversity, intensity, and customisation of multi-sensory conditions within different nursing home spaces. Sensory stimulation is typically concentrated in the communal living areas of the nursing homes or in areas with controlled or limited access, including specifically designed sensory rooms and outdoor green spaces (gardens and terraces). Opportunities for sensory stimulation are particularly limited for bed-bound senior residents and within the more functional areas, such as corridors. This suggests that the capacity of nursing home design to provide more diverse and better-integrated pleasant sensory conditions, acting as “therapeutic devices” that build residents' physical and mental abilities, encourage activity, and improve wellbeing, is far from exhausted.

Keywords: ageing-supportive environment, enabling design, multi-sensory assessment, nursing home environment

Procedia PDF Downloads 172
13 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions

Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann

Abstract:

Application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength and high-stiffness hardwoods, like European beech, generally gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of essentially all mechanical properties renders many innovative ideas futile. The tendency of hardwood towards higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduction of load-carrying capacity. Consequently, delamination of glued-laminated timber made of hardwood elements can be considered the dominant failure mechanism in such composite elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study, we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all possibly emerging deformation mechanisms was missing until now, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood, with all material constants defined as functions of moisture content, was developed. Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer constructed from finite elements, represented as a homogeneous and isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane adhesive (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, with additive decomposition of the total strain, are implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured under different levels of moisture content, and the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions.
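
A one-dimensional caricature of the additive strain decomposition at the heart of such UMAT-style models may help: total strain is split into elastic, visco-elastic (one Kelvin element), mechano-sorptive, and hygroexpansion parts, integrated step by step under moisture cycling. All constants below are invented for illustration; the actual model is 3D, orthotropic, and fully moisture-dependent.

```python
# 1D sketch of additive strain decomposition (illustrative constants,
# not the paper's calibrated model): eps_total = elastic + visco-elastic
# (Kelvin element) + mechano-sorptive + swelling/shrinkage.
import numpy as np

E = 14000.0            # MPa, elastic modulus (hypothetical, beech)
E_v, tau = 30000.0, 10.0   # Kelvin stiffness (MPa), retardation time (days)
m_ms = 1e-5            # mechano-sorptive compliance per unit |d(moisture)|
alpha = 0.002          # swelling strain per unit moisture-content change

dt, days = 0.5, 200
sigma = 2.0            # constant applied stress, MPa
mc = 0.12 + 0.04 * np.sin(np.linspace(0, 8 * np.pi, int(days / dt)))

eps_ve = eps_ms = eps_sw = 0.0
for i in range(1, len(mc)):
    dmc = mc[i] - mc[i - 1]
    eps_ve += dt / tau * (sigma / E_v - eps_ve)   # Kelvin creep update
    eps_ms += m_ms * abs(dmc) * sigma             # mechano-sorptive creep
    eps_sw += alpha * dmc                         # free swelling/shrinkage
total = sigma / E + eps_ve + eps_ms + eps_sw
print(f"total strain after {days} days: {total:.5f}")
```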

Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination

Procedia PDF Downloads 436
12 A Study of the Trap of Multi-Homing in Customers: A Comparative Case Study of Digital Payments

Authors: Shari S. C. Shang, Lynn S. L. Chiu

Abstract:

In the digital payment market, some consumers use only one payment wallet, while many others engage in multi-homing with a variety of payment services. With the diffusion of new payment systems, we examined the determinants of the adoption of multi-homing behavior. This study aims to understand how a digital payment provider dynamically expands business touch points with cross-business strategies to enrich the digital ecosystem and avoid the trap of multi-homing in customers. By synthesizing the platform ecosystem literature, we constructed a two-dimensional research framework, with one determinant being user digital behavior, from offline to online intentions, and the other being digital payment touch points, from convenient accessibility to cross-business platforms. To explore on a broader scale, we selected 12 digital payments from 5 countries: the UK, the US, Japan, Korea, and Taiwan. Based on the interplay of user digital behaviors and payment touch points, we grouped the study cases into four types: (1) Channel Initiated: users originated from retailers with high access to in-store shopping, with face-to-face guidance for payment adoption. Providers offer rewards for customer loyalty and secure the retailer's efficient cash-flow management. (2) Social Media Dependent: users are usually digital natives with high access to social media or the internet who shop and pay digitally. Providers might not own physical or online shops but are licensed to aggregate money flows through virtual ecosystems. (3) Early Life Engagement: digital banks race to capture the next generation, from popularity to profitability. This type of payment aims to give children a taste of financial freedom while letting parents track their spending. Providers seek to capitalize on the digital payment and e-commerce boom and hold on to new customers into adulthood. (4) Traditional Banking: plastic credit cards are purposely included as a control group to track the evolution of business strategies in digital payments. Traditional credit card users may follow the bank's digital strategy to land on different types of digital wallets or mostly keep using plastic credit cards. This research analyzed the business growth models and inter-firm coopetition strategies of the selected cases. The results of the multiple-case analysis reveal that channel-initiated payments bundle rewards with the retailer's business discounts for recurring purchases. They also extend other financial services, such as insurance, to fulfill customers' new demands. By contrast, social-media-dependent payments develop new usages and new value creation, such as P2P money transfer through network effects among virtual social ties, while early-life engagements offer virtual banking products to children who are digital natives but overlooked by incumbents. This has disrupted the banking business domains in preparation for the metaverse economy. Lastly, the control group of traditional plastic credit cards has gradually converted to a BaaS (banking-as-a-service) model depending on customers' preferences. Multi-homing behavior is unavoidable in digital payment competition. Payment providers may encounter multiple waves of multi-homing threats after a short period of success. A dynamic cross-business collaboration strategy should be explored to continuously evolve digital ecosystems and allow users a broader shopping experience and continual usage.

Keywords: digital payment, digital ecosystems, multi-homing users, cross-business strategy, user digital behavior intentions

Procedia PDF Downloads 160
11 Optimizing AI Voice for Adolescent Health Education: Preferences and Trustworthiness Across Teens and Parents

Authors: Yu-Lin Chen, Kimberly Koester, Marissa Raymond-Flesh, Anika Thapar, Jay Thapar

Abstract:

Purpose: Effectively communicating adolescent health topics to teens and their parents is crucial. This study critically evaluates the optimal use of artificial intelligence (AI) tools, which are increasingly prevalent in disseminating health information. By fostering a deeper understanding of AI voice preference in the context of health, the research aspires to have a ripple effect, enhancing the collective health literacy and decision-making capabilities of both teenagers and their parents. This study explores the potential of AI voices within health learning modules for annual well-child visits. We aim to identify preferred voice characteristics and understand the factors influencing perceived trustworthiness, ultimately aiming to improve health literacy and decision-making in both demographics. Methods: A cross-sectional study assessed preferences and trust perceptions of AI voices in learning modules among teens (aged 11-18) and their parents/guardians in Northern California. The study involved the development of four distinct learning modules covering various adolescent health-related topics: general communication, sexual and reproductive health communication, parental monitoring, and well-child check-ups. Participants were asked to evaluate eight AI voices across the modules on six factors, namely intelligibility, naturalness, prosody, social impression, trustworthiness, and overall appeal, using Likert scales ranging from 1 to 10 (the higher, the better). They were also asked to select their preferred voice for each module. Descriptive statistics summarized participant demographics. Chi-square and t-tests explored differences in voice preferences between groups. Regression models identified factors impacting the perceived trustworthiness of the top-selected voice per module. Results: Data from 104 participants (63 teens; 41 adult guardians) were included in the analysis. The mean age was 14.9 for teens (54% male) and 41.9 for parents/guardians (12% male). While similar voice-quality ratings were observed across groups, preferences varied by topic. For instance, in general communication, teens leaned towards young female voices, while parents preferred mature female tones. Interestingly, this trend reversed for parental monitoring, with teens favoring mature male voices and parents opting for mature female ones. Both groups, however, converged on mature female voices for sexual and reproductive health topics. Beyond preferences, the study delved into the factors influencing perceived trustworthiness. Social impression and sound appeal emerged as the most significant contributors across all modules, jointly explaining 71-75% of the variance in trustworthiness ratings. Conclusion: The study emphasizes the importance of catering AI voices to specific audiences and topics. Since social impression and sound appeal were the critical factors influencing perceived trustworthiness across all modules, these findings highlight the need to tailor AI voices by age and by the specific health information being delivered. Ensuring AI voices resonate with both teens and their parents can foster engagement and trust, ultimately leading to improved health literacy and decision-making for both groups. Limitations and future research: This study lays the groundwork for understanding AI voice preferences among teenagers and their parents in healthcare settings. However, limitations exist. The sample represents a specific geographic location, and cultural variations might influence preferences. Additionally, the modules focused on topics related to well-child visits, and preferences might differ for more sensitive health topics. Future research should explore these limitations and investigate the long-term impact of AI voices on user engagement, health outcomes, and health behaviors.
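
The regression reported in the Results can be illustrated with a short sketch: perceived trustworthiness modelled by ordinary least squares from the other five rating factors. The data below are synthetic stand-ins for the 1-10 Likert responses, constructed so that the two factors the study highlights dominate; none of the fitted numbers are the study's.

```python
# OLS sketch of the trustworthiness regression (synthetic stand-in data,
# not the study's responses): trust ~ five other voice-rating factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 104                                   # matches the reported sample size
ratings = pd.DataFrame({
    "intelligibility":   rng.integers(1, 11, n),
    "naturalness":       rng.integers(1, 11, n),
    "prosody":           rng.integers(1, 11, n),
    "social_impression": rng.integers(1, 11, n),
    "sound_appeal":      rng.integers(1, 11, n),
})
# Synthetic response loosely dominated by the two factors the study found
trust = (0.5 * ratings["social_impression"]
         + 0.4 * ratings["sound_appeal"] + rng.normal(0, 1, n))

model = sm.OLS(trust, sm.add_constant(ratings)).fit()
print("R^2:", round(model.rsquared, 2))
print(model.params.round(2))
```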

Keywords: artificial intelligence, trustworthiness, voice, adolescent

Procedia PDF Downloads 55
10 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles

Authors: Sireetorn Kuharat, Anwar Beg

Abstract:

Motivation: Solar energy constitutes the most promising renewable energy source on Earth. Nanofluids are a very successful family of engineered fluids, which contain well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g., gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid and generally results in a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification of fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving the efficiency of direct-absorber solar collectors is immense, and to gain a deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, in the present work we describe recent computational fluid dynamics simulations of an annular solar collector system. The present work studies several different metallic nanoparticles and compares their performance. Methodology: A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water, and the annular region contains nanofluid. Three-dimensional, steady-state, incompressible laminar flow of water- (and other) based nanofluids containing a variety of metallic nanoparticles (copper oxide, aluminum oxide, and titanium oxide nanoparticles) is examined. The Tiwari-Das model is deployed, in which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as functions of the solid nanoparticle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm. Mesh-independence tests are conducted. Validation of the simulations is also performed against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings: The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing the nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum and titanium oxide nanofluids. A significant improvement in temperature distributions is achieved with copper oxide nanofluid, and this is attributed to the superior thermal conductivity of copper compared to the other metallic nanoparticles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature overshoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best improvement in performance is attained consistently with copper oxide. Conclusions: The current study generalizes previous investigations by considering multiple metallic nanoparticles and furthermore provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University. Detailed insights into thermal conductivity and viscosity with metallic nanoparticles are also provided. The analysis is also extendable to other metallic nanoparticles, including gold and zinc.
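
The effective-property correlations commonly paired with the Tiwari-Das model can be written down directly: mixture rules for density and heat capacity, Brinkman's viscosity model, and the Maxwell thermal-conductivity model. The sketch below uses typical literature property values for water and copper oxide as an example; the paper's exact property set may differ.

```python
# Standard effective-property correlations often used with the
# Tiwari-Das nanofluid model. Property values are typical literature
# figures for water and CuO and serve only as an example.
def nanofluid_properties(phi, rho_f, rho_s, cp_f, cp_s, k_f, k_s, mu_f):
    rho = (1 - phi) * rho_f + phi * rho_s                  # mixture rule
    cp = ((1 - phi) * rho_f * cp_f + phi * rho_s * cp_s) / rho
    mu = mu_f / (1 - phi) ** 2.5                           # Brinkman model
    k = k_f * (k_s + 2 * k_f - 2 * phi * (k_f - k_s)) / \
              (k_s + 2 * k_f + phi * (k_f - k_s))          # Maxwell model
    return rho, cp, mu, k

# Water base fluid with CuO nanoparticles at 4% volume fraction
rho, cp, mu, k = nanofluid_properties(
    phi=0.04, rho_f=998.2, rho_s=6500.0, cp_f=4182.0, cp_s=535.6,
    k_f=0.613, k_s=20.0, mu_f=1.003e-3)
print(f"rho={rho:.0f} kg/m3, cp={cp:.0f} J/kgK, "
      f"mu={mu:.2e} Pa.s, k={k:.3f} W/mK")
```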

Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles

Procedia PDF Downloads 143
9 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations

Authors: Nanine Fouche

Abstract:

The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH, due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations expressing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. The investigated terrains included sites in the suburbs of Athlone, Muizenberg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of inversion routines used in the analysis of the CSW results, which showcase significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure. When comparing settlement prediction methods, both Stroud's method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium-dense and dense profiles than Webb's method, which takes no account of strain level in the determination of soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to account for the non-linear stress-strain behaviour of the sands when calculating settlement.
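
The regression form described above, Vs as a power function of N60 with a separate overburden term, can be fitted in log space with a few lines of Python. The 80 data pairs below are synthetic, so the fitted coefficients are illustrative only and are not the study's values.

```python
# Sketch of the regression form: Vs = a * N60^b * (Pa/sigma_v')^c,
# fitted by least squares in log space. Data pairs are synthetic;
# the exponents printed here are NOT the study's fitted values.
import numpy as np

rng = np.random.default_rng(2)
n60 = rng.uniform(5, 50, 80)             # SPT blow counts
sigma_v = rng.uniform(20, 200, 80)       # vertical effective stress, kPa
Pa = 100.0                               # atmospheric pressure, kPa
vs = 80 * n60**0.3 * (Pa / sigma_v)**-0.15 * rng.lognormal(0, 0.1, 80)

# log Vs = log a + b*log N60 + c*log(Pa/sigma_v')
X = np.column_stack([np.ones(80), np.log(n60), np.log(Pa / sigma_v)])
coef, *_ = np.linalg.lstsq(X, np.log(vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"Vs = {a:.1f} * N60^{b:.2f} * (Pa/sigma_v')^{c:.2f}  (m/s)")
```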

Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance

Procedia PDF Downloads 175
8 Targeting Tumour Survival and Angiogenic Migration after Radiosensitization with an Estrone Analogue in an in vitro Bone Metastasis Model

Authors: Jolene M. Helena, Annie M. Joubert, Peace Mabeta, Magdalena Coetzee, Roy Lakier, Anne E. Mercier

Abstract:

Targeting the distant tumour and its microenvironment whilst preserving bone density is important in improving the outcomes of patients with bone metastases. 2-Ethyl-3-O-sulphamoyl-estra-1,3,5(10),16-tetraene (ESE-16) is an in-silico-designed 2-methoxyestradiol analogue aimed at enhancing the parent compound's cytotoxicity and providing a more favourable pharmacokinetic profile. In this study, the potential radiosensitization effects of ESE-16 were investigated in an in vitro bone metastasis model consisting of murine pre-osteoblastic (MC3T3-E1) and pre-osteoclastic (RAW 264.7) bone cells, metastatic prostate (DU 145) and breast (MDA-MB-231) cancer cells, as well as human umbilical vein endothelial cells (HUVECs). Cytotoxicity studies were conducted on all cell lines via spectrophotometric quantification of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide. The experimental set-up consisted of flow cytometric analysis of cell cycle progression and apoptosis detection (Annexin V-fluorescein isothiocyanate) to determine the lowest ESE-16 and radiation doses that induce apoptosis and significantly reduce cell viability. Subsequent experiments entailed a 24-hour low-dose ESE-16 exposure followed by a single dose of radiation. Termination proceeded 2, 24 or 48 hours thereafter. The effect of the combination treatment was investigated on osteoclasts via tartrate-resistant acid phosphatase (TRAP) activity and actin ring formation assays. Tumour cell experiments included investigation of mitotic indices via haematoxylin and eosin staining; pro-apoptotic signalling via spectrophotometric quantification of caspase 3; deoxyribonucleic acid (DNA) damage via micronuclei analysis and histone H2A.X phosphorylation (γ-H2A.X); and Western blot analyses of bone morphogenetic protein-7 and matrix metalloproteinase-9. HUVEC experiments included flow cytometric quantification of cell cycle progression and free radical production; fluorescent examination of cytoskeletal morphology; invasion and migration studies on an xCELLigence platform; and Western blot analyses of hypoxia-inducible factor 1-alpha and vascular endothelial growth factor receptors 1 and 2. Tumour cells yielded half-maximal growth inhibitory concentration (GI50) values in the nanomolar range. ESE-16 concentrations of 235 nM (DU 145) and 176 nM (MDA-MB-231) and a radiation dose of 4 Gy were found to be significant in the cell cycle and apoptosis experiments. Bone and endothelial cells were exposed to the same doses as DU 145 cells. Cytotoxicity studies on bone cells showed that RAW 264.7 cells were more sensitive to the combination treatment than MC3T3-E1 cells. Mature osteoclasts were more sensitive than pre-osteoclasts with respect to TRAP activity; however, actin ring morphology was retained. Mitotic arrest was evident in tumour and endothelial cells in the mitotic index and cell cycle experiments. Increased caspase 3 activity and superoxide production indicated pro-apoptotic signalling in tumour and endothelial cells. Increased micronuclei numbers and γ-H2A.X foci indicated increased DNA damage in tumour cells. Compromised actin and tubulin morphologies and decreased invasion and migration were observed in endothelial cells. Western blot analyses revealed reduced metastatic and angiogenic signalling. ESE-16-induced radiosensitization inhibits metastatic signalling and tumour cell survival whilst preferentially preserving bone cells. This low-dose combination treatment strategy may promote the quality of life of patients with metastatic bone disease. Future studies will include 3-dimensional in-vitro and murine in-vivo models.

Keywords: angiogenesis, apoptosis, bone metastasis, cancer, cell migration, cytoskeleton, DNA damage, ESE-16, radiosensitization

Procedia PDF Downloads 162
7 Blockchain Based Hydrogen Market (BBH₂): A Paradigm-Shifting Innovative Solution for Climate-Friendly and Sustainable Structural Change

Authors: Volker Wannack

Abstract:

Regional, national, and international strategies focusing on hydrogen (H₂) and blockchain are driving significant advancements in hydrogen and blockchain technology worldwide. These strategies lay the foundation for the groundbreaking "Blockchain Based Hydrogen Market (BBH₂)" project. The primary goal of this project is to develop a functional Blockchain Minimum Viable Product (B-MVP) for the hydrogen market. The B-MVP will leverage blockchain as an enabling technology with a common database and platform, facilitating secure and automated transactions through smart contracts. This innovation will revolutionize logistics, trading, and transactions within the hydrogen market. The B-MVP has transformative potential across various sectors. It benefits renewable energy producers, surplus energy-based hydrogen producers, hydrogen transport and distribution grid operators, and hydrogen consumers. By implementing standardized, automated, and tamper-proof processes, the B-MVP enhances cost efficiency and enables transparent and traceable transactions. Its key objective is to establish the verifiable integrity of climate-friendly "green" hydrogen by tracing its supply chain from renewable energy producers to end users. This emphasis on transparency and accountability promotes economic, ecological, and social sustainability while fostering a secure and transparent market environment. A notable feature of the B-MVP is its cross-border operability, eliminating the need for country-specific data storage and expanding its global applicability. This flexibility not only broadens its reach but also creates opportunities for long-term job creation through the establishment of a dedicated blockchain operating company. By attracting skilled workers and supporting their training, the B-MVP strengthens the workforce in the growing hydrogen sector. Moreover, it drives the emergence of innovative business models that attract additional company establishments and startups and contributes to long-term job creation. For instance, data evaluation can be utilized to develop customized tariffs and provide demand-oriented network capacities to producers and network operators, benefitting redistributors and end customers with tamper-proof pricing options. The B-MVP not only brings technological and economic advancements but also enhances the visibility of national and international standard-setting efforts. Regions implementing the B-MVP become pioneers in climate-friendly, sustainable, and forward-thinking practices, generating interest beyond their geographic boundaries. Additionally, the B-MVP serves as a catalyst for research and development, facilitating knowledge transfer between universities and companies. This collaborative environment fosters scientific progress, aligns with strategic innovation management, and cultivates an innovation culture within the hydrogen market. Through the integration of blockchain and hydrogen technologies, the B-MVP promotes holistic innovation and contributes to a sustainable future in the hydrogen industry. The implementation process involves evaluating and mapping suitable blockchain technology and architecture, developing and implementing the blockchain, smart contracts, and depositing certificates of origin. 
It also includes creating interfaces to existing systems such as nomination, portfolio management, trading, and billing systems, testing the scalability of the B-MVP to other markets and user groups, developing data formats for process-relevant data exchange, and conducting field studies to validate the B-MVP. BBH₂ is part of the "Technology Offensive Hydrogen" funding call within the research funding of the Federal Ministry of Economics and Climate Protection in the 7th Energy Research Programme of the Federal Government.
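
To make the certificate-of-origin tracing concrete, the sketch below outlines, in plain Python, the ledger logic a B-MVP smart contract might encode: issuing a certificate for a verified green-hydrogen batch and recording every transfer so the supply chain can be traced from producer to end user. All class, field, and method names are hypothetical illustrations, not the project's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Certificate:
    cert_id: str          # hypothetical identifier of a hydrogen batch
    producer: str         # renewable energy producer that issued the batch
    kwh_renewable: float  # verified renewable energy used for electrolysis
    owner: str            # current holder of the batch

@dataclass
class HydrogenRegistry:
    """Toy registry; on a real blockchain, each state change would be a
    smart-contract transaction rather than a method call."""
    certificates: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def issue(self, cert: Certificate) -> None:
        if cert.cert_id in self.certificates:
            raise ValueError("certificate already exists")
        self.certificates[cert.cert_id] = cert
        self.history.append(("ISSUE", cert.cert_id, cert.producer))

    def transfer(self, cert_id: str, new_owner: str) -> None:
        cert = self.certificates[cert_id]  # raises KeyError if unknown
        self.history.append(("TRANSFER", cert_id, cert.owner, new_owner))
        cert.owner = new_owner

    def provenance(self, cert_id: str) -> list:
        """Trace a batch from the issuing producer to the current owner."""
        return [e for e in self.history if e[1] == cert_id]

registry = HydrogenRegistry()
registry.issue(Certificate("H2-001", "WindparkNord", 5000.0, "WindparkNord"))
registry.transfer("H2-001", "GridOperatorA")
print(registry.provenance("H2-001"))
```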

Keywords: hydrogen, blockchain, sustainability, innovation, structural change

Procedia PDF Downloads 168
6 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real-time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and again when persisting in the Data Historian to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model that combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal context, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was considerably higher (by about 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials of this model is planned in other manufacturing industries.
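
As an indication of how entropy and spectral-change estimates can be derived from sampled sensor data, the Python sketch below computes a rolling Shannon entropy and a power-spectral-density change measure over a flow-sensor series. The window and step sizes are arbitrary assumptions; the abstract does not specify how these features are computed.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import entropy

def rolling_features(x: np.ndarray, win: int = 256, step: int = 64):
    """Illustrative rolling Shannon-entropy and spectral-change features
    over a sampled sensor series; window/step sizes are arbitrary."""
    ents, spec_change, prev_psd = [], [], None
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win]
        # Shannon entropy of the amplitude histogram of the window
        hist, _ = np.histogram(seg, bins=32, density=True)
        ents.append(entropy(hist + 1e-12))
        # Spectral change: L2 distance between successive normalized PSDs
        _, psd = welch(seg, nperseg=min(128, win))
        psd = psd / (psd.sum() + 1e-12)
        spec_change.append(0.0 if prev_psd is None
                           else float(np.linalg.norm(psd - prev_psd)))
        prev_psd = psd
    return np.array(ents), np.array(spec_change)

# Toy usage on a synthetic flow signal with a behavioural change halfway
flow = np.concatenate([np.sin(np.linspace(0, 100, 2000)),
                       np.sin(np.linspace(0, 300, 2000)) + 0.3])
ents, changes = rolling_features(flow)
print(ents.shape, changes.shape, changes.argmax())
```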

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 150
5 The Impact of Neighborhood Effects on the Economic Mobility of the Inhabitants of Three Segregated Communities in Salvador (Brazil)

Authors: Stephan Treuke

Abstract:

The paper analyses neighbourhood effects on the economic mobility of the inhabitants of three segregated communities of Salvador (Brazil), that is, the socio-economic advantages and disadvantages affecting the lives of poor people due to their embeddedness in specific socio-residential contexts. Recent studies performed in Brazilian metropolises have concentrated on the structural dimensions of negative externalities in order to explain neighbourhood-level variations in a range of phenomena (delinquency, violence, access to the labour market and education) in spatially isolated and socially homogeneous slum areas (favelas). However, major disagreement remains over whether the contiguity between residents of poor neighbourhoods and higher-class condominio dwellers provides structures of opportunities or whether it fosters socio-spatial stigmatization. Based on a set of interviews investigating the variability of interpersonal networks and their activation in the struggle for economic inclusion, the study confirms that the proximity of Nordeste de Amaralina to middle- and upper-class communities positively affects access to labour opportunities. Nevertheless, residential stigmatization, as well as structures of social segmentation, annihilates these potentials. The lack of exposure to individuals and groups from outside the favela's social, educational, and cultural context restricts the structures of opportunities to the local level. Therefore, residents' interpersonal networks reveal a high degree of redundancy and localism, based on bonding ties connecting family and neighbourhood members. The resilience of segregational structures in Plataforma contributes to the naturalization of social distance patterns. Its embeddedness in a socially homogeneous residential area (Subúrbio Ferroviário), growing informally and beyond official urban policies, encourages the construction of isotopic patterns of sociability, sharing the same values, social preferences, perspectives, and behaviour models. Whereas its spatial isolation correlates with the scarcity of economic opportunities, the social heterogeneity of the Fazenda Grande II interviewees and the socializing effects of public institutions mitigate the negative repercussions of segregation. The networks' composition admits a higher degree of heterophilia and a greater proportion of bridging ties, accounting for access to broader information assets and facilitating economic mobility. The variability observed within the three different scenarios prompts reflection on the responsibility of urban policies in preventing or consolidating the social segregation process in Salvador. Instead of promoting the local development of the favela Plataforma, public housing programmes prioritize technocratic housing solutions without providing for the residents' socio-economic integration. The impact of the negative externalities related to a homogeneously poor neighbourhood is amplified in peripheral areas, rendering their inhabitants socially invisible and thus isolated from other social groups. The example of Nordeste de Amaralina portrays the failure of urban policies to bridge the social distances structuring Brazilian society's rigid stratification model, which is founded on mechanisms of segmentation (unequal access to the labour market, the education system, public transport, social security, and legal protection) and generates permanent conflicts between the two socioeconomically distant groups living in geographic contiguity. Finally, in the case of Fazenda Grande II, the public investments in both housing projects and complementary infrastructure (e.g., schools, hospitals, a community centre, police stations, recreation areas) contribute to the residents' socio-economic inclusion.

Keywords: economic mobility, neighborhood effects, Salvador, segregation

Procedia PDF Downloads 279
4 Light Sensitive Plasmonic Nanostructures for Photonic Applications

Authors: Istvan Csarnovics, Attila Bonyar, Miklos Veres, Laszlo Himics, Attila Csik, Judit Kaman, Julia Burunkova, Geza Szanto, Laszlo Balazs, Sandor Kokenyesi

Abstract:

In this work, the performance of gold nanoparticles was investigated for the stimulation of photosensitive materials for photonic applications. Gold nanoparticles are widely used for surface plasmon resonance experiments, not least because of the manifestation of optical resonances in the visible spectral region. The localized surface plasmon resonance is rather easily observed in nanometre-sized metallic structures and is widely used for measurements and sensing, in semiconductor devices, and even in optical data storage. Firstly, gold nanoparticles on a silica glass substrate satisfy the conditions for surface plasmon resonance in the green-red spectral range, where chalcogenide glasses have the highest sensitivity. The gold nanostructures influence and enhance the optical, structural, and volume changes and promote exciton generation in the gold nanoparticle/chalcogenide layer structure. The experimental results support the importance of localized electric fields in the photo-induced transformation of chalcogenide glasses and suggest new approaches to improve the performance of these optical recording media. The results may be utilized for direct, micrometre- or submicron-scale geometrical and optical pattern formation and for the further development of explanations of these effects in chalcogenide glasses. Besides that, gold nanoparticles can be added to organic light-sensitive materials. Acrylate-based materials are frequently used for the optical, holographic recording of optoelectronic elements due to photo-stimulated structural transformations. The holographic recording process and the photo-polymerization effect can be enhanced by the localized plasmon field of the created gold nanostructures. Finally, gold nanoparticles are widely used for electrochemical and optical sensor applications. Although these NPs can be synthesized in several ways, perhaps one of the simplest methods is the thermal annealing of pre-deposited thin films on glass or silicon surfaces. With this method, the parameters of the annealing process (time, temperature) and the thickness of the pre-deposited thin film influence and define the resulting size and distribution of the NPs on the surface. Localized surface plasmon resonance (LSPR) is a very sensitive optical phenomenon and can be utilized for a large variety of sensing purposes (chemical sensors, gas sensors, biosensors, etc.). Surface-enhanced Raman spectroscopy (SERS) is an analytical method that can significantly increase the yield of Raman scattering of target molecules adsorbed on the surface of metallic nanoparticles. The sensitivity of LSPR- and SERS-based devices depends strongly on the material used as well as on the size and geometry of the metallic nanoparticles. By controlling these parameters, the plasmon absorption band can be tuned and the sensitivity optimized. The technological parameters of the generated gold nanoparticles were investigated, and their influence on the SERS and LSPR sensitivity was established. The LSPR sensitivity was simulated for gold nanocubes and nanospheres with the MNPBEM Matlab toolbox. It was found that the enhancement factor (which characterizes the increase in the peak shift for multi-particle arrangements compared to single-particle models) depends on the size of the nanoparticles and on the distance between the particles. This work was supported by the GINOP-2.3.2-15-2016-00041 project, which is co-financed by the European Union and the European Social Fund.
Istvan Csarnovics is grateful for the support of the New National Excellence Program of the Ministry of Human Capacities (ÚNKP-17-4). Attila Bonyár and Miklós Veres are grateful for the support of the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
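
The LSPR simulations above use the MNPBEM Matlab toolbox; as a language-neutral illustration of the underlying physics only, the Python sketch below uses the quasi-static dipole approximation with a textbook Drude fit for gold to show how a small nanosphere's plasmon band red-shifts as the refractive index of the surrounding medium increases. The Drude parameters are common literature values, not taken from this work, and the sketch is not a substitute for boundary-element simulations.

```python
import numpy as np

def gold_eps_drude(wavelength_nm: np.ndarray) -> np.ndarray:
    """Drude-model dielectric function of gold; eps_inf, plasma energy,
    and damping are standard textbook fit values, used only to
    illustrate the trend."""
    ev = 1239.84 / wavelength_nm               # photon energy in eV
    eps_inf, wp, gamma = 9.84, 9.01, 0.072
    return eps_inf - wp**2 / (ev**2 + 1j * gamma * ev)

def extinction_quasistatic(wavelength_nm, radius_nm, n_medium):
    """Quasi-static extinction of a small sphere: sigma ~ k * Im(alpha),
    with alpha = 4*pi*r^3 * (eps - eps_m) / (eps + 2*eps_m)."""
    eps, eps_m = gold_eps_drude(wavelength_nm), n_medium**2
    alpha = 4 * np.pi * radius_nm**3 * (eps - eps_m) / (eps + 2 * eps_m)
    k = 2 * np.pi * n_medium / wavelength_nm
    return k * np.imag(alpha)

wl = np.linspace(400, 800, 400)
for n in (1.00, 1.33, 1.50):   # air, water, glass-like medium
    peak = wl[np.argmax(extinction_quasistatic(wl, 20.0, n))]
    print(f"n = {n:.2f}: LSPR peak near {peak:.0f} nm")  # red-shifts with n
```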

Keywords: light sensitive nanocomposites, metallic nanoparticles, photonic application, plasmonic nanostructures

Procedia PDF Downloads 306
3 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data sources to construct a dynamic financial graph that accurately reflects market intricacies. Daily opening, closing, high, and low prices are collected for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, critical macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, are integrated into the model. The GCN component learns the relational patterns among the financial instruments, which are represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatio-temporal representations learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation of the GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The Root Mean Square Error (RMSE) was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven analysis framework.
Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
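
A minimal sketch of the hybrid architecture described above is given below: one graph convolution over the market graph at each trading day, followed by an LSTM across the time dimension and a per-asset prediction head. The layer sizes, single-GCN-layer design, and toy inputs are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GCNLSTM(nn.Module):
    """Illustrative GCN -> LSTM hybrid: graph convolution per time step
    over the market graph, then an LSTM across the time dimension."""
    def __init__(self, n_feats: int, hidden: int = 64):
        super().__init__()
        self.gcn_weight = nn.Linear(n_feats, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # e.g., next-day return per asset

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (T, N, F) daily features per asset; adj: (N, N) market graph
        # Symmetric normalization: A_hat = D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt()
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = torch.relu(a_hat @ self.gcn_weight(x))   # (T, N, H)
        # Treat each node as a batch element; run the LSTM over time
        seq, _ = self.lstm(h.permute(1, 0, 2))        # (N, T, H)
        return self.head(seq[:, -1])                  # (N, 1)

# Toy usage: 30 days, 5 assets, 6 features (OHLC, volume, macro signal)
x = torch.randn(30, 5, 6)
adj = (torch.rand(5, 5) > 0.5).float()   # stand-in co-movement graph
model = GCNLSTM(n_feats=6)
print(model(x, adj).shape)               # torch.Size([5, 1])
```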

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 66
2 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality issues, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
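
The sketch below illustrates the two parallel 2D views the abstract describes: a spectrogram capturing periodicity in the frequency domain and a derivative heatmap highlighting sharp fluctuations and turning points in the time domain. The exact construction used by Times2D may differ; the window length and the stacking of derivative orders here are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

def to_2d_views(x: np.ndarray, fs: float = 1.0, nperseg: int = 64):
    """Sketch of Times2D-style parallel 2D inputs: a spectrogram for
    periodicity and a derivative heatmap for sharp fluctuations."""
    # Frequency-domain view: log-magnitude spectrogram (freq x time)
    _, _, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    spec_view = np.log1p(sxx)
    # Time-domain view: stack successive derivative orders into a 2D map
    d1 = np.gradient(x)                 # local slope / turning points
    d2 = np.gradient(d1)                # curvature / sharp changes
    deriv_view = np.stack([x, d1, d2])  # (3, T) heatmap-like array
    return spec_view, deriv_view

t = np.arange(2048)
series = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.randn(t.size)
spec, deriv = to_2d_views(series)
print(spec.shape, deriv.shape)   # (freq_bins, time_frames) and (3, 2048)
```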

Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation

Procedia PDF Downloads 42
1 Enhancing Disaster Resilience: Advanced Natural Hazard Assessment and Monitoring

Authors: Mariza Kaskara, Stella Girtsou, Maria Prodromou, Alexia Tsouni, Christodoulos Mettas, Stavroula Alatza, Kyriaki Fotiou, Marios Tzouvaras, Charalampos Kontoes, Diofantos Hadjimitsis

Abstract:

Natural hazard assessment and monitoring are crucial in managing the risks associated with fires, floods, and geohazards, particularly in regions prone to these natural disasters, such as Greece and Cyprus. Recent advancements in technology, developed by the BEYOND Center of Excellence of the National Observatory of Athens, have been successfully applied in Greece and are now set to be transferred to Cyprus. The implementation of these advanced technologies in Greece has significantly improved the country's ability to respond to these natural hazards. For wildfire risk assessment, a scalar wildfire occurrence risk index is created based on the predictions of machine learning models. Predicting fire danger is crucial for the sustainable management of forest fires, as it provides essential information for designing effective prevention measures and facilitating response planning for potential fire incidents. A reliable forecast of fire danger is a key component of integrated forest fire management and is heavily influenced by various factors that affect fire ignition and spread. The fire risk model is validated using sensitivity and specificity metrics. For flood risk assessment, a multi-faceted approach is employed, including the application of remote sensing techniques, the collection and processing of data from the most recent population and building census, technical studies and field visits, as well as hydrological and hydraulic simulations. All input data are used to create precise flood hazard maps for various flooding scenarios, together with detailed flood vulnerability and flood exposure maps, which are finally combined to produce the flood risk map. Critical points are identified, and mitigation measures are proposed for the worst-case scenario: refuge areas are defined, and escape routes are designed. Flood risk maps can assist in raising awareness and help save lives. Validation is carried out through historical flood events using remote sensing data and records from the civil protection authorities. For geohazards monitoring (e.g., landslides, subsidence), Synthetic Aperture Radar (SAR) and optical satellite imagery are combined with geomorphological and meteorological data and other landslide/ground deformation contributing factors. To monitor critical infrastructures, including dams, advanced InSAR methodologies are used to identify surface movements over time. Monitoring these hazards provides valuable information for understanding the underlying processes and could lead to early warning systems that protect people and infrastructure. Validation is carried out through both geotechnical expert evaluations and visual inspections. The success of these systems in Greece has paved the way for their transfer to Cyprus to enhance Cyprus's capabilities in natural hazard assessment and monitoring. This transfer is being made through capacity-building activities, fostering continuous collaboration between Greek and Cypriot experts. Apart from the knowledge transfer, small demonstration actions are implemented to showcase the effectiveness of these technologies in real-world scenarios. In conclusion, the transfer of advanced natural hazard assessment technologies from Greece to Cyprus represents a significant step forward in enhancing the region's resilience to disasters.
The EXCELSIOR project funds the knowledge exchange, demonstration actions, and capacity-building activities and is committed to empowering Cyprus with the tools and expertise to effectively manage and mitigate the risks associated with these natural hazards. Acknowledgement: The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project.
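
Since the fire-risk model is validated with sensitivity and specificity, the short sketch below shows how those two metrics can be computed from a scalar risk index against observed fire occurrence. The decision threshold is illustrative; an operational system would tune it on historical fire records.

```python
import numpy as np

def sensitivity_specificity(y_true, risk_score, threshold=0.5):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) for a scalar fire-risk index against observed fire occurrence.
    The 0.5 threshold is an illustrative assumption."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(risk_score) >= threshold
    tp = np.sum(y_pred & y_true)      # fires correctly flagged as high risk
    tn = np.sum(~y_pred & ~y_true)    # quiet days correctly flagged as low risk
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: observed ignitions vs. hypothetical model risk scores
obs = [1, 0, 0, 1, 0, 1, 0, 0]
score = [0.9, 0.2, 0.6, 0.7, 0.1, 0.4, 0.3, 0.05]
sens, spec = sensitivity_specificity(obs, score)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.67, 0.80
```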

Keywords: earth observation, monitoring, natural hazards, remote sensing

Procedia PDF Downloads 38