Search results for: rapid compression machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6116

56 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems

Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana

Abstract:

The challenges of the rapid growth in the demand for transport have traditionally been seen within the context of the problems of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including those related to crime such as cyber-attacks, that threaten the security of the transport of people and goods. To the best of the authors’ knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed approach is based on a structured framework starting with a detailed specification of the transport asset map (transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various approaches for exploitation of the vulnerabilities, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and their categories, and include insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents including policies and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks is an extension of the current methods that are typically employed. Specifically, the partitioning of risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm. This interplay is rapidly taking on increasing significance for security as emerging cyber-physical technologies shape the future of all transport modes. Examples include: Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. Thus, the framework enables a more complete risk picture to be developed for today’s transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as systems with security-by-design and the co-design of safety and security to be achieved. An initial application of the framework to transport systems has shown that intra-modal consideration of security measures is sub-optimal and that a holistic and multi-modal approach that also addresses the intersections/transition points of such networks is required, as their vulnerability is high. This is in line with traveler-centric transport service provision, widely accepted as the future of mobility services.
In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems. It requires a detailed understanding of the transport architecture to enable a detailed vulnerability analysis to be undertaken, creates threat scenarios, and transforms them into risks which form the basis for the formulation of interventions.
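
As a purely illustrative companion to the threat→impact categorisation described above, the minimal sketch below represents the four risk categories as a simple data structure; the class names, fields, and example risk are assumptions made for this listing, not artefacts of the framework itself.

```python
# Illustrative sketch (not from the paper): the framework's four threat->impact
# risk categories expressed as a simple data structure.
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    PHYSICAL = "physical"
    CYBER = "cyber"

@dataclass
class Risk:
    threat_domain: Domain   # where the exploitation originates
    impact_domain: Domain   # where the consequence is felt
    scenario: str           # threat scenario derived from the asset map and vulnerabilities
    mitigation: str         # policy, tactic, or technical intervention

    @property
    def category(self) -> str:
        return f"{self.threat_domain.value}->{self.impact_domain.value}"

# Hypothetical example: a cyber attack on rail signalling with a physical impact.
r = Risk(Domain.CYBER, Domain.PHYSICAL,
         scenario="Spoofed movement authority on an ERTMS-equipped line",
         mitigation="Authenticated radio block centre messaging")
print(r.category)   # cyber->physical
```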

Keywords: mitigations, risk, transport, security, vulnerabilities

Procedia PDF Downloads 165
55 Transitioning towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Mozhdeh Khalili Kordabadi

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck of discarded textiles being filled every second, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. By adopting these strategies, the textile industry can contribute to a more sustainable and environmentally friendly future.
Introduction: Textiles, particularly clothing, are essential to human existence. However, the rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition.
Approaches to Address Environmental Challenges:
- Creation of Renewable Raw Material Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production.
- Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing.
- Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability.
- Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation.
- Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal.
- Innovative Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles.
Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry.
These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension.

Procedia PDF Downloads 96
54 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before being deployed in the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, and sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This fuzzy-PID combination is novel to the wear testing application for spinal simulators and demonstrated superior performance against PID when tested over a spectrum of frequencies. Results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially cover even profiles of patients performing various daily living activities.
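
To make the supervisory fuzzy-PID idea concrete, the following is a minimal Python sketch of a PID position loop whose proportional gain is scaled by a simple fuzzy rule on the error magnitude. The gains, membership functions, square-wave profile, and first-order plant stand-in are illustrative assumptions, not parameters from the paper.

```python
# Minimal illustrative sketch: PID position loop with a fuzzy supervisory gain
# scheduler, loosely inspired by the fuzzy-PID scheme described in the abstract.
import numpy as np

class FuzzyPID:
    def __init__(self, kp=2.0, ki=0.5, kd=0.05, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def kp_scale(self, err):
        """Supervisory layer: boost Kp when |error| is 'large' (weighted-average defuzzification)."""
        e = abs(err)
        small = float(np.clip(1.0 - e / 1.0, 0.0, 1.0))    # membership of 'error is small' (deg)
        large = float(np.clip((e - 0.5) / 1.5, 0.0, 1.0))  # membership of 'error is large' (deg)
        return (small * 1.0 + large * 2.0) / (small + large + 1e-9)

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp_scale(err) * self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy usage: track a non-sinusoidal (square-wave) angular profile on a crude first-order plant.
ctrl = FuzzyPID()
angle = 0.0
for k in range(5000):
    t = k * ctrl.dt
    target = 7.5 * np.sign(np.sin(2 * np.pi * t))   # +/-7.5 deg flexion-extension profile
    u = ctrl.step(target, angle)
    angle += (u - angle) * ctrl.dt / 0.05            # stand-in for actuator/linkage dynamics
```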

Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 168
53 Robust Decision Support Framework for Addressing Uncertainties in Water Resources Management in the Mekong

Authors: Chusit Apirumanekul, Chayanis Krittasudthacheewa, Ratchapat Ratanavaraha, Yanyong Inmuong

Abstract:

Rapid economic development in the Lower Mekong region is leading to changes in water quantity and quality. Changes in land and forest use, infrastructure development, increasing urbanization, migration patterns, and climate risks are increasing demands for water within various sectors, placing pressure on scarce water resources. Appropriate policies, strategies, and planning are urgently needed for improved water resource management. Over the last decade, Thailand has experienced more frequent and intense drought situations, affecting the level of water storage in reservoirs along with insufficient water allocation for agriculture during the dry season. The Huay Saibat River Basin, one of the well-known water-scarce areas in the northeastern region of Thailand, is experiencing ongoing water scarcity that affects both farming livelihoods and household consumption. Drought management in Thailand mainly focuses on emergency responses rather than advance preparation and mitigation for long-term solutions. Despite many efforts from local authorities to mitigate the drought situation, there is as yet no long-term comprehensive water management strategy that integrates climate risks alongside other uncertainties. This paper assesses the application in the Huay Saibat River Basin of the Robust Decision Support framework, exploring the feasibility of multiple drought management policies, including shifts in cropping season, crop changes, infrastructure operations, and the use of groundwater, under a wide range of uncertainties, including climate and land-use change. A series of consultative meetings were organized with relevant agencies and experts at the local level to understand and explore plausible water resources strategies and to identify thresholds to evaluate the performance of those strategies. Three different climate conditions were identified (dry, normal, and wet). Other non-climatic factors influencing water allocation were further identified, including changes from sugarcane to rubber, delaying rice planting, increasing natural retention storage, and using groundwater to supply demands for household consumption and small-scale gardening. Water allocation and water use in various sectors, such as agriculture, domestic, industry, and the environment, were estimated by utilising the Water Evaluation And Planning (WEAP) system under various scenarios developed from the combination of the climatic and non-climatic factors mentioned earlier. Water coverage (i.e., the percentage of water demand being successfully supplied) was defined as a threshold for water resource strategy assessment. Thresholds for the different sectors (agriculture, domestic, industry, and environment) were specified during multi-stakeholder engagements. Plausible water strategies (e.g., increasing natural retention storage, change of crop type, and use of groundwater as an alternative source) were evaluated against the specified thresholds in the four sectors under the three climate conditions. 'Business as usual' was evaluated for comparison. A strategy is considered robust when its performance is assessed as successful under a wide range of uncertainties across the river basin. Without adopting any strategy, the water scarcity situation is likely to escalate in the future. Among the strategies identified, the use of groundwater as an alternative source was considered a potential option in combating water scarcity for the basin.
Further studies are needed to explore the feasibility of groundwater use as a potential sustainable source.
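
A minimal sketch of the robustness test described above: each strategy is scored against the sector thresholds across the climate scenarios, and a strategy is flagged robust only if it meets every threshold under every scenario. The coverage values and thresholds below are placeholders for illustration, not results from the WEAP analysis.

```python
# Illustrative robustness check (placeholder numbers, not WEAP results):
# coverage = % of water demand successfully supplied, per sector and climate scenario.
SECTORS = ["agriculture", "domestic", "industry", "environment"]
THRESHOLDS = {"agriculture": 70, "domestic": 95, "industry": 85, "environment": 60}  # assumed

coverage = {  # strategy -> climate scenario -> sector -> simulated coverage (%)
    "business_as_usual": {
        "dry":    {"agriculture": 55, "domestic": 88, "industry": 80, "environment": 50},
        "normal": {"agriculture": 72, "domestic": 96, "industry": 88, "environment": 62},
        "wet":    {"agriculture": 85, "domestic": 99, "industry": 95, "environment": 75},
    },
    "groundwater_alternative": {
        "dry":    {"agriculture": 74, "domestic": 97, "industry": 86, "environment": 61},
        "normal": {"agriculture": 82, "domestic": 99, "industry": 92, "environment": 68},
        "wet":    {"agriculture": 90, "domestic": 99, "industry": 96, "environment": 78},
    },
}

def is_robust(strategy):
    """Robust = meets every sector threshold under every climate condition."""
    return all(coverage[strategy][scn][s] >= THRESHOLDS[s]
               for scn in coverage[strategy] for s in SECTORS)

for strategy in coverage:
    print(strategy, "robust" if is_robust(strategy) else "fails under at least one scenario")
```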

Keywords: climate change, robust decision support, scenarios, water resources management

Procedia PDF Downloads 168
52 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed addressing not only the needs of customers but also taking into account their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, machine learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication: SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, as well as more seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations. SatNav can also enable location-based services like car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real-time, and to respond to changes instantaneously. SatEO can be used to provide the maps required by traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation derived data are used to provide meteorological information such as wind speed and direction, humidity, and other variables that must be incorporated into models contributing to traffic management services. The paper will provide examples of services and applications that have been developed with the aim of identifying innovative solutions and new business models enabled by new digital technologies, engaging the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include Connected Autonomous Vehicles, electric vehicles, green logistics, and others. Relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 133
51 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from the vanishing gradient and low-efficiency problems when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
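
As a rough illustration of the kind of pipeline described above, the sketch below fine-tunes a DNABERT-style pretrained encoder for sequence classification with the Hugging Face transformers library. The checkpoint name, 6-mer tokenization, toy labels, and hyperparameters are assumptions for illustration, not the configuration used in the paper.

```python
# Hedged sketch (assumed checkpoint and toy data), not the authors' exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "zhihan1996/DNA_bert_6"   # assumed public DNABERT checkpoint (6-mer vocabulary)

def to_kmers(seq, k=6):
    """DNABERT expects space-separated overlapping k-mers rather than raw bases."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Toy training pairs: (circuit sequence, expression class), purely illustrative.
data = [("ATGCGTACGTTAGCATCGATCGTACGATCG", 1),
        ("TTTTAACCGGTTAACCGGTTAACCGGTTAA", 0)]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for seq, label in data:
        inputs = tokenizer(to_kmers(seq), return_tensors="pt", truncation=True)
        outputs = model(**inputs, labels=torch.tensor([label]))
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.eval()
with torch.no_grad():
    logits = model(**tokenizer(to_kmers(data[0][0]), return_tensors="pt")).logits
    print("predicted class:", logits.argmax(dim=-1).item())
```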

Keywords: transformers, generative ai, gene expression design, classification

Procedia PDF Downloads 59
50 Social and Political Economy of Paid and Unpaid Work: Work of Women Home Based Workers in National Capital Region (NCR), India

Authors: Sudeshna Sengupta

Abstract:

Women’s work lives weave a complex fabric of myriad work relations and complex structures. Their lives, when seen through the lens of work, are a saga of conjugated oppression by intertwined structures that are vertically and horizontally interwoven in a very complex manner. Women interact with multiple institutions through their work. The interactions and interplay of institutions shape their organization of work. They intersperse productive work with reproductive work, unpaid economic activities with unpaid care work, and all kinds of activities with leisure and self-care. The proposed paper intends to understand how women working as home-based workers in the National Capital Region (NCR) of India are organizing their everyday work, and how the organization of work is influenced by the interplay of structures. Situating itself in a multidisciplinary theoretical framework, this paper brings out how the gendering of work is playing out in the political, economic, and social domains and shaping work-life within the family and in the paid workspace. The paper will use a primary data source, which is qualitative in nature. It will comprise 15 qualitative interviews of women home-based workers from the National Capital Region. The research uses a life history approach. The sampling was purposive, using snowballing as a method. The dataset is part of the primary data (qualitative) collected for the ongoing Ph.D. work in Gender Studies at Ambedkar University Delhi. The home-based workers interviewed were in “non-factory” wage relations based on piece rates with flexible working hours. Their workplaces were their own homes, with no spatial divide between living spaces and workspaces. Home-based workers were recognized as a group in the domain of labor economics in the 1980s. When menial work was cheaper than machine work, the capital owners preferred to outsource work as home-based work to women. These production spaces are fragmented, and the identity of gender is created within labor processes to favor material accumulation. Both the employers and employees acknowledged the material gain of the capital owner when work was subcontracted to women at home. Simultaneously, the market reinforced women’s reproductive role by conforming to patriarchal ideology. The contractors played an important role in implementing localized control on workers and also in finding workers for fragmented, gendered production processes. Their presence helped the employers in bringing together multiple forms of oppression that ranged from creating a structure to flouting laws by creating shadow employers. It created an intertwined social and economic structure as well as a workspace where the line between productive and reproductive work gets blurred. The state invisibilized itself either by keeping the sector out of the domain of laws or by not implementing its own laws regulating working conditions or social security. It allowed the local hierarchy to function and define localized working conditions. The productive-reproductive continuum reveals a labor control that influenced both the productive and reproductive work of women.

Keywords: informal sector, paid work, women workers, labor processes

Procedia PDF Downloads 160
49 Metal-Organic Frameworks-Based Materials for Volatile Organic Compounds Sensing Applications: Strategies to Improve Sensing Performances

Authors: Claudio Clemente, Valentina Gargiulo, Alessio Occhicone, Giovanni Piero Pepe, Giovanni Ausanio, Michela Alfè

Abstract:

Volatile organic compound (VOC) emissions represent a serious risk to human health and the integrity of ecosystems, especially at high concentrations. For this reason, it is very important to continuously monitor environmental quality and to develop fast and reliable portable sensors that allow on-site analysis. Chemiresistors have become promising candidates for VOC sensing owing to their ease of fabrication, the variety of suitable sensitive materials, and their simple sensing data. A chemoresistive gas sensor is a transducer that allows the concentration of an analyte in the gas phase to be measured, because the changes in resistance are proportional to the amount of the analyte present. The selection of the sensitive material, which interacts with the target analyte, is very important for the sensor performance. The most widely used VOC detection materials are metal oxides (MOx), thanks to their rapid recovery, high sensitivity to various gas molecules, and easy fabrication. Their sensing performance can be improved in terms of operating temperature, selectivity, and detection limit. Metal-organic frameworks (MOFs) have also attracted a lot of attention in the field of gas sensing due to their high porosity, high surface area, tunable morphologies, and structural variety. MOFs are generated by the self-assembly of multidentate organic ligands connecting with adjacent multivalent metal nodes via strong coordination interactions, producing stable and highly ordered crystalline porous materials with well-designed structures. However, most MOFs intrinsically exhibit low electrical conductivity. To improve this property, MOFs can be combined with organic and inorganic materials in a hybrid fashion to produce composite materials, or they can be transformed into more stable structures. MOFs, indeed, can be employed as precursors of metal oxides with well-designed architectures via calcination. The MOF-derived MOx partially preserve the original structure, with a high surface area and intrinsic open pores that act as trapping centers for gas molecules, and show a higher electrical conductivity. Core-shell heterostructures, in which the surface of a metal oxide core is completely coated by a MOF shell, forming a junction at the core-shell heterointerface, can also be synthesized. Nanocomposites in which MOF structures are intercalated with graphene-related materials can also be produced; their conductivity increases thanks to the high electron mobility of carbon materials. As MOF structures, zinc-based MOFs belonging to the ZIF family were selected in this work. Several Zn-based materials based on and/or derived from MOFs were produced, structurally characterized, and arranged in a chemoresistive architecture, also exploring the potential of different approaches for sensing-layer deposition based on PLD (pulsed laser deposition) and, in the case of thermally labile materials, MAPLE (Matrix Assisted Pulsed Laser Evaporation) to enhance adhesion to the support. The sensors were tested in a controlled-humidity chamber, allowing the concentration of ethanol, a typical analyte chosen among the VOCs for a first survey, to be varied. The effect of heating the chemiresistor to improve sensing performance was also explored. Future research will focus on exploring new manufacturing processes for MOF-based gas sensors with the aim of improving sensitivity and selectivity and reducing operating temperatures.

Keywords: chemiresistors, gas sensors, graphene related materials, laser deposition, MAPLE, metal-organic frameworks, metal oxides, nanocomposites, sensing performance, transduction mechanism, volatile organic compounds

Procedia PDF Downloads 60
48 Cycleloop Personal Rapid Transit: An Exploratory Study for Last Mile Connectivity in Urban Transport

Authors: Suresh Salla

Abstract:

In this paper, the author explores the most sustainable last-mile transport mode, addressing the present problems of traffic congestion, jams, pollution, and travel stress. The development of energy-efficient, sustainable, integrated transport systems is a must to make our cities more livable. Emphasis on autonomous, connected, electric, and shared systems for effective utilization of vehicles and public infrastructure is on the rise. Many surface mobility innovations like PBS, ride-hailing, ride-sharing, etc., although workable, add to already congested roads when analyzed holistically, are difficult to ride in hostile weather, cause pollution, and impose commuter stress. Sustainability of transportation is evaluated with respect to public adoption, average speed, energy consumption, and pollution. Why does the public prefer certain modes over others? How does commute time play a role in mode selection or shift? What factors play a role in energy consumption and pollution? Based on the study, it is clear that the public prefers a transport mode which is exhaustive (i.e., less need for interchange because the network is widespread), intensive (i.e., less waiting time because vehicles are available at frequent intervals), and convenient with the latest technologies. Average speed is dependent on stops, the number of intersections, signals, clear route availability, etc. It is clear from physics that the higher the kerb weight of a vehicle, the higher the operational energy consumption. Higher kerb weight also demands heavier infrastructure. Pollution is dependent on the source of energy, the efficiency of the vehicle, and the average speed. A mode can be made exhaustive when the unit infrastructure cost is low and can be offered intensively when the vehicle cost is low. Reliable and seamless integrated mobility till the last ¼ mile (Five Minute Walk, FMW) is a must to encourage sustainable public transportation. The study shows that the average speed and reliability of dedicated modes (like Metro, PRT, BRT, etc.) is high compared to road vehicles. Electric vehicles, and more so battery-less or third-rail vehicles, reduce pollution. One potential mode is Cycleloop PRT, in which the commuter rides an e-cycle in a dedicated path – elevated, at grade, or underground. An e-bike with a kerb weight per rider of 15 kg, about 1/50th of a car or 1/10th of other PRT systems, makes it a sustainable mode. The Cycleloop tube will be light, sleek, and scalable and can be erected in a modular fashion, either on modified street lamp-posts or hung/suspended between two stations. Embarking and disembarking points, or offline stations, can be placed at intervals that suit FMW access to mass public transit. In terms of convenience, the guided e-bike can be made self-balancing, thus enabling driverless on-demand vehicles. An e-bike equipped with smart electronics and drive controls can intelligently respond to field sensors and move autonomously in response to the central controller. Smart switching allows travel from origin to destination without interchange of cycles. A DC-powered battery-less e-cycle with voluntary manual pedaling makes the system sustainable and provides health benefits. Tandem e-bikes, smart switching, and platoon-operations algorithm options provide superior throughput for the Cycleloop. Thus, Cycleloop PRT will be an exhaustive, intensive, convenient, reliable, speedy, sustainable, safe, pollution-free, and healthy alternative mode for last-mile connectivity in cities.

Keywords: cycleloop PRT, five-minute walk, lean modular infrastructure, self-balanced intelligent e-cycle

Procedia PDF Downloads 131
47 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
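
A toy sketch of the sequential Bayesian update described above: the road network is a weighted directed graph, each camera detection updates a posterior over the candidate targets, and detections that shorten the remaining travel time to a target raise that target's probability. The graph, travel times, likelihood model, and parameter beta are illustrative assumptions, not the PeMS-based setup used by the authors.

```python
# Illustrative sketch (assumed graph and likelihood), not the authors' PeMS-based system.
import math
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([          # (from_node, to_node, mean travel time in minutes)
    ("A", "B", 4), ("B", "C", 3), ("B", "D", 6),
    ("C", "E", 5), ("D", "E", 2), ("E", "C", 4),
])
targets = ["C", "E"]                                    # pre-determined high-value targets
posterior = {t: 1.0 / len(targets) for t in targets}    # uniform prior

def likelihood(prev_node, node, target, beta=0.5):
    """Moves that shorten the remaining travel time to a target are more likely
    under the hypothesis that the suspect is heading to that target."""
    d_prev = nx.shortest_path_length(G, prev_node, target, weight="weight")
    d_now = nx.shortest_path_length(G, node, target, weight="weight")
    return math.exp(beta * (d_prev - d_now))

def update(prev_node, node):
    """Bayes rule: multiply the prior by the detection likelihood and renormalize."""
    for t in targets:
        posterior[t] *= likelihood(prev_node, node, t)
    z = sum(posterior.values())
    for t in targets:
        posterior[t] /= z

# Suspect detected moving A -> B -> D: the evidence shifts sharply toward target E.
update("A", "B")
update("B", "D")
print(posterior)
```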

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 114
46 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
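
The sketch below illustrates, in toy form, two of the mechanisms outlined in the abstract: a fairness-aware aggregation weight that flattens plain sample-count weighting so devices with few samples are not drowned out, and Gaussian noise on clipped updates as a differential-privacy-style safeguard. The weighting rule (alpha), clipping norm, and noise scale are this listing's assumptions, not the paper's actual scheme.

```python
# Toy sketch of fairness-aware aggregation with a DP-style safeguard (assumed parameters).
import numpy as np

def clip(update, max_norm=1.0):
    """Bound each client's influence before adding privacy noise."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def aggregate(updates, sample_counts, noise_std=0.01, alpha=0.5):
    """Aggregate clipped client updates.

    alpha < 1 flattens FedAvg-style sample-count weighting, so devices with few
    samples carry relatively more weight (a simple fairness-aware rule).
    Gaussian noise added to the average gives a DP-style privacy safeguard.
    """
    weights = np.array([n ** alpha for n in sample_counts], dtype=float)
    weights /= weights.sum()
    clipped = [clip(u) for u in updates]
    avg = sum(w * u for w, u in zip(weights, clipped))
    return avg + np.random.normal(0.0, noise_std, size=avg.shape)

# One toy round: three devices with highly imbalanced sample counts.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
print(aggregate(updates, sample_counts=[5000, 200, 50]))
```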

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 75
45 Cell-free Bioconversion of n-Octane to n-Octanol via a Heterogeneous and Bio-Catalytic Approach

Authors: Shanna Swart, Caryn Fenner, Athanasios Kotsiopoulos, Susan Harrison

Abstract:

Linear alkanes are produced as by-products from the increasing use of gas-to-liquid fuel technologies for synthetic fuel production and offer great potential for value addition. Their current use as low-value fuels and solvents does not maximize this potential. Therefore, attention has been drawn towards the direct activation of these aliphatic alkanes to more useful products such as alcohols, aldehydes, carboxylic acids, and derivatives. Cytochrome P450 monooxygenases (P450s) can be used for the activation of these aliphatic alkanes using whole-cell or cell-free systems. Some limitations of whole-cell systems include reduced mass transfer, stability, and possible side reactions. Since the P450 systems are little studied as cell-free systems, they form the focus of this study. Challenges of a cell-free system include co-factor regeneration, substrate availability, and enzyme stability. Enzyme immobilization offers a positive outlook on this dilemma, as it may enhance the stability of the enzyme. In the present study, two different P450s (CYP153A6 and CYP102A1) as well as the relevant accessory enzymes required for electron transfer (ferredoxin and ferredoxin reductase) and co-factor regeneration (glucose dehydrogenase) have been expressed in E. coli and purified by metal affinity chromatography. Glucose dehydrogenase (GDH) was used as a model enzyme to assess the potential of various enzyme immobilization strategies, including surface attachment on MagReSyn® microspheres with various functionalities and on electrospun nanofibers; self-assembly based methods forming Cross-Linked Enzymes (CLE), Cross-Linked Enzyme Aggregates (CLEAs), and spherezymes; as well as entrapment in a sol-gel. The nanofibers were synthesized by electrospinning, which required the building of an electrospinning machine. The nanofiber morphology has been analyzed by SEM, and binding will be further verified by FT-IR. Covalent attachment based methods showed limitations, with only ferredoxin reductase and GDH retaining activity after immobilization, which was largely attributed to insufficient electron transfer and inactivation caused by the crosslinkers (60% and 90% relative activity loss for the free enzyme when using 0.5% glutaraldehyde and glutaraldehyde/ethylenediamine (1:1 v/v), respectively). So far, initial experiments with GDH have shown the most potential when immobilized via its His-tag onto the surface of MagReSyn® microspheres functionalized with Ni-NTA. It was found that crude GDH could be simultaneously purified and immobilized with sufficient activity retention. Immobilized pure and crude GDH could be recycled 9 and 10 times, respectively, with approximately 10% activity remaining. The immobilized GDH was also more stable than the free enzyme after storage for 14 days at 4˚C. This immobilization strategy will also be applied to the P450s and optimized with regard to enzyme loading and immobilization time, as well as characterized and compared with the free enzymes. It is anticipated that the proposed immobilization set-up will offer enhanced enzyme stability (as well as reusability and easy recovery) and minimal mass transfer limitation, with continuous co-factor regeneration and minimal enzyme leaching. All of this provides a positive outlook on this robust multi-enzyme system for the efficient activation of linear alkanes, as well as the potential for immobilization of multiple enzymes, including multimeric enzymes, for different bio-catalytic applications beyond alkane activation.

Keywords: alkane activation, cytochrome P450 monooxygenase, enzyme catalysis, enzyme immobilization

Procedia PDF Downloads 224
44 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training records are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated, for example, with sociodemographic characteristics, habits, and activities. Some are intentionally included to obtain predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset comprising 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of the observations, such as bootstrap resampling with an appropriate sample size.
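
To make the co-occurrence-graph idea concrete, here is a toy sketch: attributes that appear together in a record become connected nodes, maximal fully connected groups (cliques) are candidate clusters, and each cluster is ranked by frequency and a simple surprise score (observed vs. expected under independence). The records and metric definitions are illustrative assumptions, not the platform's actual data or formulas.

```python
# Toy co-occurrence graph with clique "clusters" ranked by frequency and surprise.
from itertools import combinations
from collections import Counter
import networkx as nx

records = [  # each row: attributes observed together for one person (placeholder data)
    {"male", "smoker", "high_bp"},
    {"male", "smoker", "high_bp", "low_activity"},
    {"female", "high_bp"},
    {"male", "smoker", "low_activity"},
]

item_freq = Counter(i for r in records for i in r)
pair_freq = Counter(frozenset(p) for r in records for p in combinations(sorted(r), 2))

G = nx.Graph()
G.add_edges_from(tuple(p) for p in pair_freq)   # items co-occurring anywhere become connected

def support(items):
    """Number of records containing the whole item set."""
    return sum(1 for r in records if items <= r)

n = len(records)
clusters = []
for clique in nx.find_cliques(G):               # maximal fully connected item groups
    items = set(clique)
    obs = support(items) / n                    # observed frequency
    exp = 1.0
    for i in items:
        exp *= item_freq[i] / n                 # expected frequency under independence
    if obs > 0:
        clusters.append((items, obs, obs / exp))   # (cluster, frequency, surprise)

for items, freq, surprise in sorted(clusters, key=lambda c: c[2], reverse=True):
    print(items, round(freq, 2), round(surprise, 1))
```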

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 275
43 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated using the Normalized Burn Ratio (NBR) index. This is performed manually by comparing two images, one obtained before and one after the fire. Then, by using the bitemporal difference of the preprocessed satellite images, the dNBR is calculated. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS and comprises seven classes. This procedure generates a burn severity report for the area chosen manually by the user. This study is carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), which is a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI is chosen to support regular burnt-area severity mapping using a medium spatial resolution sensor (10–20 m). This tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over large extents and longer temporal periods. Two case studies were carried out to demonstrate the performance of the tool. The Blue Mountains National Park forest, affected by the Australian fire season between 2019 and 2020, is used to describe the workflow of WWSAT. At this site, more than 7,809 km2 of burnt area was detected using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests in Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These areas can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area, since satellite images are free and the cost of field surveys is avoided.
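
A hedged Google Earth Engine (Python API) sketch of the NBR/dNBR workflow the tool automates is given below. The dates, extent, cloud threshold, and simple median compositing are placeholders, and the severity breaks are commonly cited USGS-style dNBR thresholds rather than WWSAT's exact classification.

```python
# Hedged Earth Engine sketch (placeholder extent/dates), not WWSAT's full workflow.
import ee
ee.Initialize()   # requires prior Earth Engine authentication

region = ee.Geometry.Rectangle([150.2, -33.9, 150.7, -33.4])   # placeholder extent

def median_composite(start, end):
    return (ee.ImageCollection("COPERNICUS/S2_SR")
            .filterBounds(region)
            .filterDate(start, end)
            .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
            .median())

def nbr(img):
    # NBR = (NIR - SWIR2) / (NIR + SWIR2), i.e. Sentinel-2 bands B8 and B12
    return img.normalizedDifference(["B8", "B12"]).rename("NBR")

pre = nbr(median_composite("2019-09-01", "2019-10-31"))    # pre-fire composite
post = nbr(median_composite("2020-02-01", "2020-03-31"))   # post-fire composite
dnbr = pre.subtract(post).rename("dNBR")

burnt = dnbr.gte(0.1)   # unburnt: dNBR < 0.1, burnt: dNBR >= 0.1 (as in the abstract)

# USGS-style dNBR severity breaks (illustrative, not WWSAT's exact seven classes).
severity = (ee.Image(1)                                      # 1 = unburnt / regrowth
            .where(dnbr.gte(0.10).And(dnbr.lt(0.27)), 2)     # low severity
            .where(dnbr.gte(0.27).And(dnbr.lt(0.44)), 3)     # moderate-low severity
            .where(dnbr.gte(0.44).And(dnbr.lt(0.66)), 4)     # moderate-high severity
            .where(dnbr.gte(0.66), 5))                       # high severity
```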

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 233
42 Development and Evaluation of a Cognitive Behavioural Therapy Based Smartphone App for Low Moods and Anxiety

Authors: David Bakker, Nikki Rickard

Abstract:

Smartphone apps hold immense potential as mental health and wellbeing tools. Support can be made easily accessible and can be used in real-time while users are experiencing distress. Furthermore, data can be collected to enable machine learning and automated tailoring of support to users. While many apps have been developed for mental health purposes, few have adhered to evidence-based recommendations and even fewer have pursued experimental validation. This paper details the development and experimental evaluation of an app, MoodMission, that aims to provide support for low moods and anxiety, help prevent clinical depression and anxiety disorders, and serve as an adjunct to professional clinical supports. MoodMission was designed to deliver cognitive behavioural therapy for specifically reported problems in real-time, momentary interactions. Users report their low moods or anxious feelings to the app along with a subjective units of distress scale (SUDS) rating. MoodMission then provides a choice of 5-10 short, evidence-based mental health strategies called Missions. Users choose a Mission, complete it, and report their distress again. Automated tailoring, gamification, and in-built data collection for analysis of effectiveness were also included in the app’s design. The development process involved the construction of an evidence-based behavioural plan, designing of the app, building and testing procedures, feedback-informed changes, and a public launch. A randomized controlled trial (RCT) was conducted comparing MoodMission to two other apps and a waitlist control condition. Participants completed measures of anxiety, depression, well-being, emotional self-awareness, coping self-efficacy, and mental health literacy at the start of their app use and 30 days later. At the time of submission (November 2016), over 300 participants had participated in the RCT. Data analysis will begin in January 2017. At the time of this submission, MoodMission has over 4,000 users. A repeated-measures ANOVA of 1,390 completed Missions reveals that SUDS (0-10) ratings were significantly reduced between pre-Mission ratings (M = 6.20, SD = 2.39) and post-Mission ratings (M = 4.93, SD = 2.25), F(1, 1389) = 585.86, p < .001, ηp² = .30. This effect was consistent across both low moods and anxiety. Preliminary analyses of the data from the outcome measures surveys reveal improvements across mental health and wellbeing measures as a result of using the app over 30 days. This includes a significant increase in coping self-efficacy, F(1, 22) = 5.91, p = .024, ηp² = .21. Complete results from the RCT in which MoodMission was evaluated will be presented. Results will also be presented from the continuous outcome data being recorded by MoodMission. MoodMission was successfully developed and launched, and preliminary analyses suggest that it is an effective mental health and wellbeing tool. In addition to the clinical applications of MoodMission, the app holds promise as a research tool to conduct component analysis of psychological therapies and overcome the constraints of laboratory-based studies. The support provided by the app is discreet, tailored, evidence-based, and transcends barriers of stigma, geographic isolation, financial limitations, and low health literacy.

Keywords: anxiety, app, CBT, cognitive behavioural therapy, depression, eHealth, mission, mobile, mood, MoodMission

Procedia PDF Downloads 271
41 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Nowadays, wetlands are the subject of contradictory debates among scientific, political, and administrative viewpoints. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures, and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition, and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced, since the beginning of the 1990s, a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies, and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Thus, wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows the characteristic wetland types to be quantified and characterized for each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features. Indeed, the use of very high spatial resolution images (>3 m) is necessary to map both small and large areas. Moreover, recent advances in artificial intelligence (AI) and deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is also based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimagery) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges, and brackish marshes), with a kappa index higher than 85%.
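
For orientation, the sketch below shows the two object-oriented classifiers compared in the study (k-NN and SVM) applied with scikit-learn to a generic table of per-object spectral and textural features, with the kappa index as the evaluation metric. The features, data, and hyperparameters are placeholders, not those of the Centre-Val de Loire workflow, so the printed kappa values are meaningless here.

```python
# Hedged sketch: k-NN vs. SVM on placeholder per-object features, scored with Cohen's kappa.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))            # e.g. mean NIR, NDVI, texture metrics per image object
y = rng.integers(0, 2, size=500)         # 1 = wetland, 0 = other (random placeholder labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))

for name, clf in [("k-NN", knn), ("SVM", svm)]:
    clf.fit(X_train, y_train)
    kappa = cohen_kappa_score(y_test, clf.predict(X_test))
    print(f"{name}: kappa = {kappa:.2f}")   # near zero on random data, by construction
```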

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 69
40 Stakeholder Engagement to Address Urban Health Systems Gaps for Migrants

Authors: A. Chandra, M. Arthur, L. Mize, A. Pomeroy-Stevens

Abstract:

Background: Lower and middle-income countries (LMICs) in Asia face rapid urbanization resulting in both economic opportunities (the urban advantage) and emerging health challenges. Urban health risks are magnified in informal settlements and include infectious disease outbreaks, inadequate access to health services, and poor air quality. Over the coming years, urban spaces in Asia will face accelerating public health risks related to migration, climate change, and environmental health. These challenges are complex and require multi-sectoral and multi-stakeholder solutions. The Building Health Cities (BHC) program is funded by the United States Agency for International Development (USAID) to work with smart city initiatives in the Asia region. BHC approaches urban health challenges by addressing policies, planning, and services through a health equity lens, with a particular focus on informal settlements and migrant communities. The program works to develop data-driven decision-making, build inclusivity through stakeholder engagement, and facilitate the uptake of appropriate technology. Methodology: The BHC program has partnered with the smart city initiatives of Indore in India, Makassar in Indonesia, and Da Nang in Vietnam. Implementing partners support municipalities to improve health delivery and equity using two key approaches: political economy analysis and participatory systems mapping. Political economy analyses evaluate barriers to collective action, including corruption, security, accountability, and incentives. Systems mapping evaluates community health challenges using a cross-sectoral approach, analyzing the impact of economic, environmental, transport, security, health system, and built environment factors. The mapping exercise draws on the experience and expertise of a diverse cohort of stakeholders, including government officials, municipal service providers, and civil society organizations. Results: Systems mapping and political economy analyses identified significant barriers for health care in migrant populations. In Makassar, migrants are unable to obtain the necessary card that entitles them to subsidized health services. This finding is being used to engage with municipal governments to mitigate the barriers that limit migrant enrollment in the public social health insurance scheme. In Indore, the project identified poor drainage of storm and wastewater in migrant settlements as a cause of poor health. Unsafe and inadequate infrastructure placed residents of these settlements at risk for both waterborne diseases and injuries. The program also evaluated the capacity of urban primary health centers serving migrant communities, identifying challenges related to their hours of service and shortages of health workers. In Da Nang, the systems mapping process has only recently begun, with the formal partnership launched in December 2019. Conclusion: This paper explores lessons learned from BHC’s systems mapping, political economy analyses, and stakeholder engagement approaches. The paper shares progress related to the health of migrants in informal settlements. Case studies feature barriers identified and mitigating steps, including governance actions, taken by local stakeholders in partner cities. The paper includes an update on ongoing progress from Indore and Makassar and experience from the first six months of program implementation from Da Nang.

Keywords: informal settlements, migration, stakeholder engagement mapping, urban health

Procedia PDF Downloads 118
39 Flexural Response of Sandwiches with Micro Lattice Cores Manufactured via Selective Laser Sintering

Authors: Emre Kara, Ali Kurşun, Halil Aykul

Abstract:

The lightweight sandwiches obtained with the use of various core materials such as foams, honeycombs, lattice structures etc., which have high energy absorbing capacity and high strength-to-weight ratio, are suitable for several applications in the transport industry (automotive, aerospace, shipbuilding), where fuel savings, increased load carrying capacity, vehicle safety and reduced emission of harmful gases are very important aspects. While sandwich structures with foam and honeycomb cores have been applied for many years, there is growing interest in a new generation of sandwiches with micro lattice cores. Various production methods have been developed to produce these core structures as the technology has advanced. One of these is an additive manufacturing technique called selective laser sintering/melting (SLS/SLM), which is very popular nowadays because it saves production time and enables the production of complex topologies. The static bending and dynamic low velocity impact behaviour of sandwiches with carbon fiber/epoxy skins and micro lattice cores produced via SLS/SLM has been reported in only a few studies. The goal of this investigation was the analysis of the flexural response of sandwiches consisting of glass fiber reinforced plastic (GFRP) skins and micro lattice cores manufactured via SLS under thermo-mechanical loads, in order to compare the results in terms of peak load and absorbed energy values with respect to the effect of core cell size, temperature and support span length. The micro lattice cores were manufactured using SLS technology, which builds the part drawn in 3D computer-aided design (CAD) software. The lattice cores, designed as a body centered cubic (BCC) model with two different cell sizes (d = 2 and 2.5 mm) and a strut diameter of 0.3 mm, were produced from titanium alloy (Ti6Al4V) powder. During the production of all the core materials, the same production parameters, such as laser power, laser beam diameter and building direction, were kept constant. The Vacuum Infusion (VI) method was used to produce the skin materials, made of [0°/90°] woven S-Glass prepreg laminates. The core and skins were then combined under VI. Three point bending tests were carried out on a servo-hydraulic test machine with different support span distances (L = 30, 45, and 60 mm) and various temperature values (T = 23, 40 and 60 °C) in order to analyze the influence of support span and temperature. The failure mode of the collapsed sandwiches was investigated using 3D computed tomography (CT), which allows a three-dimensional reconstruction of the analyzed object. The main results of the bending tests are load-deflection curves, peak force and absorbed energy values. The results were compared according to the effect of cell size, support span and temperature. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry, where problems of collision and crash have increased in recent years.
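
For readers who want to reproduce the kind of post-processing implied above, the sketch below extracts the peak load and the absorbed energy (area under the load-deflection curve) from a three-point-bending record; the file name, column layout and units are assumptions, not details taken from the study.

```python
# Minimal sketch (illustrative only): peak load and absorbed energy from a
# load-deflection record; file name and column layout are assumptions.
import numpy as np

data = np.loadtxt("bending_test.csv", delimiter=",", skiprows=1)
deflection_mm, load_N = data[:, 0], data[:, 1]

peak_load = load_N.max()                                    # peak force, N
absorbed_energy = np.trapz(load_N, deflection_mm) / 1000.0  # area under curve, N*mm -> J

print(f"Peak load: {peak_load:.1f} N, absorbed energy: {absorbed_energy:.2f} J")
```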

Keywords: light-weight sandwich structures, micro lattice cores, selective laser sintering, transport application

Procedia PDF Downloads 339
38 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach, which uses 39 other risk characteristics, to produce a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
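
As a simplified illustration of the spectral vegetation index mapping mentioned above (and not FireWatch's proprietary processing), the sketch below computes NDVI from the red and near-infrared bands of an orthomosaic and bins it into coarse fuel-condition classes; the band order and thresholds are assumptions.

```python
# Minimal sketch (illustrative only): NDVI from a multispectral orthomosaic and a
# coarse fuel-condition map; band order and thresholds are assumptions.
import numpy as np
import rasterio

with rasterio.open("orthomosaic.tif") as src:   # hypothetical multispectral file
    red = src.read(3).astype(float)             # assumed band order
    nir = src.read(4).astype(float)

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# 0: bare/dead fuel, 1: cured vegetation, 2: green (irrigated) vegetation
fuel_class = np.digitize(ndvi, bins=[0.2, 0.5])
print(np.unique(fuel_class, return_counts=True))
```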

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 106
37 Development of Chitosan/Dextran Gelatin Methacrylate Core/Shell 3D Scaffolds and Protein/Polycaprolactone Melt Electrowriting Meshes for Tissue Regeneration Applications

Authors: J. D. Cabral, E. Murray, P. Turner, E. Hewitt, A. Ali, M. McConnell

Abstract:

Worldwide demand for organ replacement and tissue regeneration is progressively increasing. Three-dimensional (3D) bioprinting, where a physical construct is produced using computer-aided design, is a promising tool to advance the tissue engineering and regenerative medicine fields. In this paper, we describe two different approaches to developing 3D bioprinted constructs for use in tissue regeneration. Bioink development is critical in achieving the 3D biofabrication of functional, regenerative tissues. Hydrogels, cross-linked macromolecules that absorb large amounts of water, have received widespread interest as bioinks due to their relevant soft tissue mechanics, biocompatibility, and tunability. In turn, not only is bioink optimisation crucial, but the creation of vascularized tissues remains a key challenge for the successful fabrication of thicker, more clinically relevant bioengineered tissues. Among the various methodologies, cell-laden hydrogels are regarded as a favorable approach and, when combined with novel core/shell 3D bioprinting technology, an innovative strategy towards creating new vessel-like structures. In this work, we investigate this cell-based approach by using human umbilical vein endothelial cells (HUVECs) entrapped in a viscoelastic chitosan/dextran (CD)-based core hydrogel, printed simultaneously along with a gelatin methacrylate (GelMA) shell. We have expanded beyond our previously reported FDA-approved, commercialised, post-surgical CD hydrogel, Chitogel®, by functionalizing it with cell adhesion and proteolytic peptides in order to promote bone marrow-derived mesenchymal stem cell (immortalized BMSC cell line, hTERT) and HUVEC growth. The biocompatibility and biodegradability of these cell lines in a 3D bioprinted construct are demonstrated. Our studies show that particular peptide combinations crosslinked within the CD hydrogel were found to increase in vitro growth of BMSCs and HUVECs by more than two-fold. These gels were then used as a core bioink, combined with the more mechanically robust, UV-irradiated GelMA shell bioink, to create 3D regenerative, vessel-like scaffolds with high print fidelity. In addition, microporous melt electrowritten (MEW) scaffolds made from milk proteins blended with polycaprolactone (PCL) were found to show promising bioactivity, exhibiting a significant increase in keratinocyte (HaCaT) and fibroblast (normal human dermal fibroblast, NhDF) cell migration and proliferation when compared to PCL-only scaffolds. In conclusion, our studies indicate that a peptide-functionalized CD hydrogel bioink reinforced with a GelMA shell is biocompatible, biodegradable, and an appropriate cell delivery vehicle in the creation of regenerative 3D constructs. In addition, melt electrowriting (MEW), a novel 3D printing technique which allows fabrication of micrometre fibre meshes, was used to 3D print PCL and bioactive milk protein, lactoferrin (LF) and whey protein (WP), blended scaffolds for potential skin regeneration applications. MEW milk protein/PCL scaffolds exhibited high porosity, low overall biodegradation, and rapid protein release. Human fibroblasts and keratinocytes were seeded onto the scaffolds. Scaffolds containing high concentrations of LF and combined proteins (LF+WP) showed improved cell viability over time as compared to PCL-only scaffolds.
This research highlights two scaffolds, made using two different 3D printing techniques and combining natural and synthetic biomaterial components, intended as regenerative constructs for potential chronic wound treatment.

Keywords: biomaterials, hydrogels, regenerative medicine, 3D bioprinting

Procedia PDF Downloads 268
36 Influence of Bacterial Biofilm on the Corrosive Processes in Electronic Equipment

Authors: Iryna P. Dzieciuch, Michael D. Putman

Abstract:

Humidity is known to degrade Navy ship electronic equipment, especially in hot, moist environments. If left untreated, it can cause significant and permanent damage. Even rigorous inspection and frequent clean-up would not prevent further equipment contamination and degradation because of the constant presence of favorable growth conditions for many microorganisms. Generally, relative humidity levels of less than 60% will inhibit corrosion in electronic equipment, but because Navy electronics often operate in hot and humid environments, prevention via dehumidification is not always possible. Currently, there is no defined research that fully describes the key mechanisms which cause degradation of electronics and their coatings. The corrosive action of most bacteria develops mainly through (i) mycelium adherence to the metal plates, (ii) facilitation of the formation of pitting areas, and (iii) production of organic acids such as citric, iso-citric, cis-aconitic and alpha-ketoglutaric acid, which are corrosive to electronic equipment and its components. Our approach studies corrosive action in electronic equipment: circuit boards, wires and connections that are exposed to a humid environment, which worsens during condensation. In our new approach, the technical task builds on work with bacterial communities in public areas, bacterial genetics, bioinformatics, biostatistics and Scanning Electron Microscopy (SEM) of corroded circuit boards. Based on these methods, we collect and examine environmental samples from biofilms of corroded and non-corroded sites, where bacterial contamination of electronic equipment, such as machine racks and shore boats, is an ongoing concern. Sample collection and sample analysis are focused on addressing the key questions identified above through the following tasks: laboratory sample processing and evaluation under scanning electron microscopy, initial sequencing and data evaluation, and bioinformatics and data analysis. Preliminary results from scanning electron microscopy (SEM) have revealed that the metal particulates and alloys in corroded samples consist mostly of tin (< 40%), silicon (< 4%), sulfur (< 1%), aluminum (< 2%), magnesium (< 2%), copper (< 1%), bromine (< 2%), barium (< 1%) and iron (< 2%). We have also imaged the same sites at 12,000x magnification, which proved the existence of undisrupted biofilm organelles and crystal structures. Non-corroded sites revealed a high presence of copper (< 47%); other metals remain at a level comparable to the samples with corrosion. We have performed 1,000x magnification on the non-corroded sites and have documented the formation of copper crystals. The next step of this study is to perform metagenomic sequencing at all sites and to compare the bacterial composition present in the environment. While copper is nontoxic to living organisms, the process of bacterial adhesion creates an acidic environment by releasing citric, iso-citric, cis-aconitic and alpha-ketoglutaric acids, which in turn release copper ions (Cu++) that are highly toxic to bacteria and higher-order living organisms. This phenomenon might explain natural “antibiotic” properties that are lacking in elements such as tin. To prove or deny this hypothesis, we will use next-generation sequencing (NGS) methods to investigate the types and growth cycles of bacteria that form bacterial biofilms on the corroded and non-corroded samples.

Keywords: bacteria, biofilm, circuit board, copper, corrosion, electronic equipment, organic acids, tin

Procedia PDF Downloads 160
35 A Regional Comparison of Hunter and Harvest Trends of Sika Deer (Cervus n. nippon) and Wild Boar (Sus s. leucomystax) in Japan from 1990 to 2013

Authors: Arthur Müller

Abstract:

The study addresses the human dimensions of hunting by conducting statistical data analysis and providing decision-making support through examples of good prefectural governance and successful wildlife management, which are crucial to reduce pest species and sustain a stable hunter population in the future. It analyzes the recent revision of wildlife legislation and reveals differences in administrative management structures, as well as socio-demographic characteristics of hunters in correlation with harvest trends of sika deer and wild boar in 47 prefectures of Japan between 1990 and 2013. In a wider context, Japan’s decentralized license hunting system might in the future take on the role of a regional pioneer in East Asia. Consequently, the study contributes to similar issues in the premature hunting systems of South Korea and Taiwan. Firstly, a quantitative comparison of seven mainland regions was conducted for Hokkaido, Tohoku, Kanto, Chubu, Kinki, Chugoku, and Kyushu. Example prefectures were chosen by a cluster analysis. Shifts, differences, mean values and exponential growth rates between trap and gun hunters, age classes and common occupation types of hunters were statistically examined. While western Japan is characterized by high numbers of aged trap-hunters occupied in agriculture and forestry, the north-eastern prefectures show higher relative numbers of younger gun-hunters occupied as production and process workers. With the exception of Okinawa island, most hunters in all prefectures are 60 years old and older. Hence, unemployed and retired hunters are the fastest growing occupation group. Despite the drastic decrease in the hunter population in absolute numbers, the Hunting Recruitment Index indicated that all age classes tended to continue their hunting activity over a longer period during the decade from 2004 to 2013 than during the former decade. Associated with the rapid population increase and distribution of sika deer and wild boar since 1978, the numbers harvested by hunting and culling have also been rapidly increasing. Both wild boar hunting and culling are particularly high in western Japan, while sika hunting and culling prove most successful in Hokkaido, central and western Japan. Since the Wildlife Protection and Proper Hunting Act of 1999, distinct prefectural hunting management authorities with different powers apply management approaches under the principle of subsidiarity and the guidelines of the Ministry of the Environment. Additionally, the Act on Special Measures for Prevention of Damage Related to Agriculture, Forestry, and Fisheries Caused by Wildlife from 2008 supports local hunters in damage prevention measures through subsidies by the Ministry of Agriculture and Forestry, which has caused a rise in trap hunting, especially in western Japan. Secondly, prefectural staff in charge of wildlife management in the seven regions were contacted. In summary, Hokkaido serves as a role model for dynamic, integrative, adaptive “feedback” management of Ezo sika deer, as well as for a diverse network between management organizations, while Hyogo takes active measures to trap-hunt wild boars effectively. Both prefectures take the lead in institutional performance and capacity. Northern prefectures in the Tohoku, Chubu and Kanto regions, confronted for the first time with the emergence of wild boars and rising sika deer numbers, demand new institution and capacity building, as well as organizational learning.
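
By way of illustration, the exponential growth (or decline) rates mentioned above can be estimated with a simple log-linear fit per prefecture, as in the sketch below; the input table (prefecture, year, hunters) and its file name are assumptions, not the study's data.

```python
# Minimal sketch (illustrative only): continuous annual growth rate of hunter
# numbers per prefecture from a log-linear fit; input layout is an assumption.
import numpy as np
import pandas as pd

df = pd.read_csv("hunter_counts.csv")   # hypothetical columns: prefecture, year, hunters
rates = {}
for pref, grp in df.groupby("prefecture"):
    grp = grp.sort_values("year")
    slope, _ = np.polyfit(grp["year"], np.log(grp["hunters"]), 1)
    rates[pref] = slope                 # slope of ln(hunters) vs year = growth rate

print(pd.Series(rates).sort_values())
```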

Keywords: hunting and culling harvest trends, hunter socio-demographics, regional comparison, wildlife management approach

Procedia PDF Downloads 279
34 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) permeates various aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I.; this means that humans will be absent from the decision-making process. The question comes naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research’s subject matter in A.I. and Robot Ethics focuses mainly on Robot Rights, and its ultimate objective is to answer the questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood and what are the requirements needed to be a moral agent (and therefore accountable for responsibility)? (iii) Can an A.I. be a moral agent (ontological requirements)? and finally (iv) ought it to be one (ethical implications)? To answer these questions, this research project was carried out through a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation of the project. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary to be a moral agent: those are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy. Autonomy, free will, intentionality, consciousness and responsibility were identified as the requirements to be considered a moral agent. The work went on to build a symmetrical system between personhood and A.I. to enable the emergence of the ontological differences between the two. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to an artificial agent: the first is whether all the ontological requirements are present, and the second is whether, present or not, an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements on an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. as artificial moral agents, hence responsible for their own actions. When considering artificial agents as responsible, already existing norms in our judicial system can be applied, such as removing them from society and re-educating them, in order to re-introduce them to society. This is in line with how the highest-profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further.
Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This implies removing it from society by virtue of its un-usability, re-programming it and, only when it is properly functioning, re-introducing it successfully.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 188
33 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
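
The sketch below is a stripped-down, pure-NumPy illustration of the idea (not the BindsNET implementation used in the study): a LIF neuron whose synapses scale each incoming spike by the recent activity of neighbouring synapses, so the forward and reversed input orders produce different peak membrane potentials. All parameters, including the pairwise relation matrix R, are arbitrary illustration values.

```python
# Minimal sketch (illustrative only): LIF neuron with interacting synapses vs a
# plain LIF neuron; all parameters are arbitrary illustration values.
import numpy as np

dt, T = 1.0, 60            # ms time step, number of steps
tau_mem, tau_syn = 20.0, 10.0
n_in = 5
w = np.full(n_in, 1.0)     # synaptic weights

# R[j, i]: how strongly recent activity on synapse j boosts a spike on synapse i
R = np.zeros((n_in, n_in))
for i in range(1, n_in):
    R[i - 1, i] = 0.8      # each synapse is potentiated by its predecessor in the sequence

def run(spike_times, dendritic):
    """Return peak membrane potential for one input spike sequence."""
    v, trace, peak = 0.0, np.zeros(n_in), 0.0
    for step in range(T):
        t = step * dt
        v -= dt * v / tau_mem                      # membrane leak
        trace -= dt * trace / tau_syn              # decaying per-synapse activity traces
        for i, ts in enumerate(spike_times):
            if abs(t - ts) < dt / 2:               # synapse i receives a spike now
                gain = 1.0 + trace @ R[:, i] if dendritic else 1.0
                v += w[i] * gain
                trace[i] += 1.0
        peak = max(peak, v)
    return peak

forward = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # input spike times, ms
for dend in (False, True):
    print(f"dendritic={dend}: forward peak={run(forward, dend):.2f}, "
          f"reversed peak={run(forward[::-1], dend):.2f}")
```

With these values, the plain neuron responds identically to both orders, while the "dendritic" neuron responds more strongly to the forward order, which is the order-discrimination effect described above.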

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 132
32 Multiple Freezing/Thawing Cycles Influence the Internal Structure and Mechanical Properties of the Achilles Tendon

Authors: Martyna Ekiert, Natalia Grzechnik, Joanna Karbowniczek, Urszula Stachewicz, Andrzej Mlyniec

Abstract:

Tendon grafting is a common procedure performed to treat tendon rupture. Before the surgical procedure, tissues intended for grafts (i.e., the Achilles tendon) are stored at ultra-low temperatures for a long time and may also be subjected to unfavorable conditions, such as repetitive freezing (F) and thawing (T). Such storage protocols may strongly influence the graft's mechanical properties, decrease its functionality and thus increase the risk of complications during the transplant procedure. The literature reports on the influence of multiple F/T cycles on the internal structure and mechanical properties of tendons remain inconclusive, simultaneously confirming and denying the negative influence of multiple F/T cycles. The inconsistent research methodology and the lack of a clear limit on the number of F/T cycles that disqualifies tissue for surgical graft purposes encouraged us to investigate the issue of multiple F/T cycles by means of biomechanical tensile tests supported by Scanning Electron Microscope (SEM) imaging. The study was conducted on male bovine Achilles tendons derived from a local abattoir. Fresh tendons were cleaned of excessive membranes and then sectioned to obtain fascicle bundles. Collected samples were randomly assigned to 6 groups subjected to 1, 2, 4, 6, 8 and 12 freezing-thawing (F/T) cycles, respectively. Each F/T cycle included deep freezing at -80°C, followed by thawing at room temperature. After the final thawing, thin slices from the side part of the samples subjected to 1, 4, 8 and 12 F/T cycles were collected for SEM imaging. Then, the width and thickness of all samples were measured to calculate the cross-sectional area. Biomechanical tests were performed using a universal testing machine (model Instron 8872, INSTRON®, Norwood, Massachusetts, USA) with a load cell with a maximum capacity of 250 kN, under standard atmospheric conditions. Both ends of each fascicle bundle were manually clamped in grasping clamps using abrasive paper and wet cellulose wadding swabs to prevent tissue slipping while clamping and testing. Samples were subjected to a testing procedure including pre-loading, pre-cycling, loading, holding and unloading steps to obtain stress-strain curves representing tendon stretching and relaxation. The stiffness of the AT fascicle bundle samples was evaluated in terms of the modulus of elasticity (Young's modulus), calculated from the slope of the linear region of the stress-strain curves. SEM imaging was preceded by chemical sample preparation, including 24 h fixation in 3% glutaraldehyde buffered with 0.1 M phosphate buffer, washing with 0.1 M phosphate buffer solution and dehydration in a graded ethanol series. SEM images (Merlin Gemini II microscope, ZEISS®) were taken at 30,000x magnification, which allowed the diameter of collagen fibrils to be measured. The results confirm a decrease in the fascicle bundle Young's modulus as well as a decrease in the diameter of collagen fibrils. These results confirm the negative influence of multiple F/T cycles on the mechanical properties of tendon tissue.
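
For illustration, the Young's modulus extraction described above (slope of the linear region of the stress-strain curve) can be sketched as follows; the file name, column layout and the strain window taken as "linear" are assumptions rather than details from the study.

```python
# Minimal sketch (illustrative only): Young's modulus from the linear region of a
# stress-strain curve; file name, column layout and strain window are assumptions.
import numpy as np

data = np.loadtxt("stress_strain.csv", delimiter=",", skiprows=1)
strain, stress_MPa = data[:, 0], data[:, 1]

# Fit only the assumed linear portion of the curve (e.g. 2-4 % strain)
mask = (strain > 0.02) & (strain < 0.04)
E_MPa, _ = np.polyfit(strain[mask], stress_MPa[mask], 1)   # slope = Young's modulus

print(f"Young's modulus ~ {E_MPa:.0f} MPa")
```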

Keywords: biomechanics, collagen, fascicle bundles, soft tissue

Procedia PDF Downloads 123
31 Non-Thermal Pulsed Plasma Discharge for Contaminants of Emerging Concern Removal in Water

Authors: Davide Palma, Dimitra Papagiannaki, Marco Minella, Manuel Lai, Rita Binetti, Claire Richard

Abstract:

Modern analytical technologies allow us to detect water contaminants at trace and ultra-trace concentrations, highlighting that a large number of organic compounds are not efficiently abated by most wastewater treatment facilities relying on biological processes; we usually refer to these micropollutants as contaminants of emerging concern (CECs). The availability of reliable and effective technologies, able to guarantee the high standards of water quality demanded by legislators worldwide, has therefore become a primary need. In this context, water plasma stands out among developing technologies as it is extremely effective in the abatement of numerous classes of pollutants, cost-effective, and environmentally friendly. In this work, a custom-built non-thermal pulsed plasma discharge generator was used to abate the concentration of selected CECs in water samples. Samples were treated in a 50 mL Pyrex reactor using two different types of plasma discharge, occurring either at the surface of the treated solution or underwater, working with positive polarity. The distance between the tips of the electrodes determined where the discharge was formed: underwater when the distance was < 2 mm, at the water surface when the distance was > 2 mm. Peak voltage was in the 100-130 kV range with typical current values of 20-40 A. The duration of the pulse was 500 ns, and the discharge frequency could be manually set between 5 and 45 Hz. Treatment of a 100 µM diclofenac solution in MilliQ water, with a pulse frequency of 17 Hz, revealed that the surface discharge was more efficient in the degradation of diclofenac, which was no longer detectable after 6 minutes of treatment. Over 30 minutes were required to obtain the same result with the underwater discharge. These results are explained by the higher rate of H₂O₂ formation (21.80 µmolL⁻¹min⁻¹ for the surface discharge against 1.20 µmolL⁻¹min⁻¹ for the underwater discharge), the larger discharge volume and UV light emission, and the high rates of ozone and NOx production (up to 800 and 1400 ppb, respectively) observed when working with the surface discharge. The surface discharge was then used for the treatment of three selected perfluoroalkyl compounds, namely perfluorooctanoic acid (PFOA), perfluorohexanoic acid (PFHxA), and perfluorooctanesulfonic acid (PFOS), both individually and in mixture, in ultrapure and groundwater matrices with an initial concentration of 1 ppb. In both matrices, PFOS exhibited the best degradation, reaching complete removal after 30 min of treatment (degradation rate 0.107 min⁻¹ in ultrapure water and 0.0633 min⁻¹ in groundwater), while the degradation rates of PFOA and PFHxA were around 65% and 80% slower, respectively. Total nitrogen (TN) measurements revealed levels up to 45 mgL⁻¹h⁻¹ in water samples treated with the surface discharge, while in analogous samples treated with the underwater discharge the TN increase was 5 to 10 times lower. These results can be explained by the significant NOx concentrations (over 1400 ppb) measured above the functioning reactor operating with the surface discharge; rapid NOx hydrolysis led to nitrate accumulation in the solution, explaining the observed evolution of the TN values. Ion chromatography measurements confirmed that the vast majority of the TN was in the form of nitrates. In conclusion, the non-thermal pulsed plasma discharge, obtained with a custom-built generator, was proven to effectively degrade diclofenac in water matrices, confirming the potential interest of this technology for wastewater treatment.
The surface discharge was proven to be more effective in CEC removal due to the high rates of formation of H₂O₂, ozone and reactive radical species, and the strong UV light emission. Furthermore, the nitrate-enriched water obtained after treatment could be an interesting added-value product to be used as a fertilizer in agriculture. Acknowledgment: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765860.
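
The degradation rates quoted above correspond to pseudo-first-order kinetics; a minimal sketch of how such a rate constant is extracted from concentration-time data is given below. The numerical values are invented placeholders, not measurements from the study.

```python
# Minimal sketch (illustrative only): pseudo-first-order rate constant k (min^-1)
# from concentration-time data; the data points are invented placeholders.
import numpy as np

t_min = np.array([0, 5, 10, 20, 30], dtype=float)
c_ppb = np.array([1.0, 0.62, 0.36, 0.13, 0.045])    # placeholder concentrations

# ln(C/C0) = -k * t  ->  the slope of the log-linear fit gives -k
slope, _ = np.polyfit(t_min, np.log(c_ppb / c_ppb[0]), 1)
k = -slope
print(f"pseudo-first-order k = {k:.3f} min^-1, half-life = {np.log(2)/k:.1f} min")
```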

Keywords: CECs removal, nitrogen fixation, non-thermal plasma, water treatment

Procedia PDF Downloads 119
30 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons’ ability to train, plan and carry out surgery. Patient-specific CIS of the brain requires several steps: (1) Fast generation of brain models. Based on the recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid …) are clearly located in the correct positions. (2) Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. (3) Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses in particular point (2). Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. All the approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion and phase-field, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a-priori knowledge of crack paths. The use of X-FEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult with computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way which is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as it is in 2D, a remarkable advantage with respect to all other computational techniques.
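
As a toy illustration of why PD handles material separation so naturally (and not the authors' solver), the sketch below runs a 1D bond-based peridynamic bar in normalised units: bonds simply break when their stretch exceeds a critical value, so a crack appears without remeshing or a predefined crack path. All parameters are arbitrary.

```python
# Minimal sketch (illustrative only): 1D bond-based peridynamic bar with bond
# breaking; normalised units, all parameters arbitrary.
import numpy as np

n, dx = 101, 1.0
horizon = 3.015 * dx                  # peridynamic horizon
x = np.arange(n) * dx
u = np.zeros(n)                       # displacements
v = np.zeros(n)                       # velocities
rho, c, s_crit = 1.0, 1.0, 0.01       # density, bond micromodulus, critical stretch
dt, v_pull = 0.1, 0.005               # time step, end-pull velocity

# Bond list: each pair of nodes within the horizon, counted once
bonds = [(i, j) for i in range(n) for j in range(i + 1, min(i + 4, n))
         if x[j] - x[i] <= horizon]
alive = np.ones(len(bonds), dtype=bool)

for step in range(3000):
    f = np.zeros(n)
    for b, (i, j) in enumerate(bonds):
        if not alive[b]:
            continue
        s = (u[j] - u[i]) / (x[j] - x[i])   # bond stretch
        if abs(s) > s_crit:
            alive[b] = False                # irreversible bond breaking -> crack
            continue
        fb = c * s * dx                     # pairwise bond force (1D, bond-based)
        f[i] += fb
        f[j] -= fb
    v += dt * f / rho
    v[:3], v[-3:] = -v_pull, v_pull         # pull the bar ends apart
    u += dt * v

print("broken bonds:", int(np.count_nonzero(~alive)))
```

No crack-path criterion appears anywhere in the loop: the discontinuity emerges wherever bonds happen to exceed the critical stretch, which is the property the abstract highlights.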

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 80
29 Made on Land, Ends Up in the Water: "iCLARE" Intelligent Remediation System for the Removal of Harmful Contaminants in Water Using Modified Reticulated Vitreous Carbon Foam

Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski

Abstract:

The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts their balance by harming aquatic plants, animals, and microorganisms. Water pollution also poses significant risks to human health. Exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of the parameters of harmful contaminants, and AI implementation. This high-performance electrochemical reactor will be built around a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon (RVC) foams with deposited modified metal oxides (MMO) and diamond thin films. This setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that are the basis of the iCLARE product and established the procedures for the detection of chemicals: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term structure of the electrode surface during electrochemical processes, and energetic efficacy, using cyclic polarization and electrochemical impedance spectroscopy (before and after electrolysis) and dynamic electrochemical impedance spectroscopy (DEIS). The latter tool allows real-time monitoring of the changes at the electrode/electrolyte interphase. On the other hand, the toxicity of the iCLARE chemicals and of the products of electrolysis is evaluated before and after the treatment using the MARA examination (IBMM) and HPLC-MS-MS (NILU), giving information about the harmfulness of the electrode material and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and the possibility of practical application, is in progress (SensDx).

Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol

Procedia PDF Downloads 86
28 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease

Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang

Abstract:

Alzheimer’s disease has become a major public health issue, as indicated by the increasing number of Americans living with Alzheimer’s disease. After decades of extensive research on Alzheimer’s disease, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat it. Five of these drugs were designed to treat the dementia symptoms, and only two drugs (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-beta plaques. However, controversy has been raised over the accelerated approvals of both Aducanumab and Lecanemab, especially with concerns about the safety and side effects of these two drugs. There is still an urgent need for further drug discovery targeting the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes, and the cholesterol is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain and offer potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neuron. Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in the progression of Alzheimer’s disease was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds for the selected protein target. A machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target.
The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
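
As an illustration of the final screening step (not the authors' pipeline), the sketch below trains a machine-learning model on docking scores for an already docked subset and uses it to rank the remaining library by predicted binding; file names, column names and the choice of fingerprints and model are assumptions, and the docking itself would be performed elsewhere.

```python
# Minimal sketch (not the authors' pipeline): train an ML model on docking scores
# and rank candidate compounds by predicted binding; file/column names are assumptions.
import numpy as np
import pandas as pd
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def fingerprint(smiles, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Training set: compounds already docked against the selected target
train = pd.read_csv("docked_subset.csv")          # hypothetical columns: smiles, docking_score
X = np.vstack([fingerprint(s) for s in train["smiles"]])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, train["docking_score"])

# Predict for the remaining library and keep the best predicted binders
library = pd.read_csv("library.csv")              # hypothetical column: smiles
fps = [fingerprint(s) for s in library["smiles"]]
keep = [i for i, f in enumerate(fps) if f is not None]
preds = model.predict(np.vstack([fps[i] for i in keep]))
ranked = library.iloc[keep].assign(pred_score=preds).nsmallest(10, "pred_score")
print(ranked)                                     # lower docking score = stronger predicted binding
```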

Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation

Procedia PDF Downloads 73
27 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: The authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. The authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes) and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: The authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which the authors categorised as diagnostic, programme, and intervention. The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from the WHO (World Health Organization), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, or mass media, or were multi-component. The main gaps in existing tools were the assessment of macro/policy-level factors, the exploration of effective immunisation communication channels, and the measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools to address five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, and improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic, as data volumes, instructions, and activities could overwhelm managers, and tools are not always applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts.
The authors' initial framework can be tested and developed further.

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 139