Search results for: explicit representation of solutions
57. Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow iron levels in the brain to be monitored in vivo. An emerging magnetic resonance imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this quantitative susceptibility mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends heavily on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. Performance was then tested on both synthetic data withheld from training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly, and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of substantial value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
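The synthetic-head generation described in this abstract can be sketched as a sampling loop: every segmented region draws its tissue properties from a literature-derived distribution, and each draw yields one head realization. The following is a minimal illustration of that idea only; the region names, property names, and all numeric values are assumptions for illustration, not values from the paper.

```python
import random

# Hypothetical (mean, std) priors per region -- illustrative values only,
# standing in for distributions fitted to a literature review.
TISSUE_PRIORS = {
    "caudate": {"iron_mg_per_g": (0.09, 0.02), "r2star_hz": (25.0, 3.0)},
    "putamen": {"iron_mg_per_g": (0.13, 0.03), "r2star_hz": (30.0, 4.0)},
    "csf":     {"iron_mg_per_g": (0.00, 0.00), "r2star_hz": (1.0, 0.2)},
}

def sample_head_realization(seed=None):
    """Draw one synthetic-head property set: every segmented region gets
    tissue values sampled from its prior (Gaussian here for simplicity)."""
    rng = random.Random(seed)
    head = {}
    for region, priors in TISSUE_PRIORS.items():
        head[region] = {
            prop: max(0.0, rng.gauss(mean, sd))  # clamp to a physical range
            for prop, (mean, sd) in priors.items()
        }
    return head

# Thousands of such realizations (here 1000) would form the training set.
training_set = [sample_head_realization(seed=i) for i in range(1000)]
```

In the paper's pipeline each realization would additionally get a morphed geometry and a simulated MRI acquisition before entering the training set.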
56. Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of the back-to-back power electronics converter, typically rated at 30% for a 2:1 speed range, but with the following important reliability and cost advantages over the DFIG: maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for the low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates, and have therefore been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme presented here overcomes this shortcoming.
The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio upon which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially since, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region is demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
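The core MRAS idea in this abstract, a reference model fed by measurements and an adjustable model corrected by a PI adaptation law until the two agree, can be shown with a deliberately simplified, PLL-style toy. This is an illustration of the principle only, not the authors' parameter-free observer; the gains, time step, and the trivial angle-based reference model are all assumptions.

```python
def mras_tracking_demo(w_true=10.0, kp=50.0, ki=400.0, dt=1e-3, steps=2000):
    """PLL-style MRAS sketch: the 'reference model' supplies the rotor angle
    implied by measurements, the 'adjustable model' integrates the estimated
    speed, and a PI adaptation law drives the angle error between the two
    models to zero, yielding speed and position estimates."""
    theta_ref = 0.0   # reference-model angle (from measurements)
    theta_hat = 0.0   # adjustable-model angle (from estimated speed)
    integral = 0.0    # integral term of the adaptation law
    w_hat = 0.0       # estimated rotor speed [rad/s]
    for _ in range(steps):
        theta_ref += w_true * dt          # angle the measurements would report
        err = theta_ref - theta_hat       # tuning error between the two models
        integral += err * dt
        w_hat = kp * err + ki * integral  # PI adaptation law -> speed estimate
        theta_hat += w_hat * dt           # integrate in the adjustable model
    return w_hat
```

With these gains the estimate converges to the true speed within the simulated two seconds; a real BDFRG observer would build the reference quantity from winding voltages and currents rather than from the angle directly.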
55. A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting the secure, reliable cross-border exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 access points and service metadata publishers (SMPs) in Europe to provide secure document exchanges using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution through several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we followed a methodology that first sets a common background for the requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over the existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication.
The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles, so further efforts will be needed to promote this technology to companies. The consortium includes nine partners. Two are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other seven (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and in information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border eDelivery of documents in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged across Europe, forming the basis for extending the network to the whole of Europe. This project has been funded under Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.
Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
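A concrete piece of the SMP discovery machinery mentioned in this abstract is the DNS naming convention used in the PEPPOL/eDelivery network to locate a participant's SMP. A minimal sketch follows; the scheme and SML zone shown are assumptions for illustration, and the exact convention in force (hash algorithm, zone) should be checked against the SML specification of the target environment.

```python
import hashlib

def smp_lookup_hostname(participant_id,
                        scheme="iso6523-actorid-upis",
                        sml_zone="edelivery.tech.ec.europa.eu"):
    """Build the DNS hostname under which a participant's SMP is published,
    following the 'B-' + md5(lowercased participant id) naming convention
    used by the PEPPOL SML. Zone and scheme here are illustrative defaults."""
    digest = hashlib.md5(participant_id.lower().encode("utf-8")).hexdigest()
    return f"B-{digest}.{scheme}.{sml_zone}"
```

An access point would resolve this hostname, then query the SMP over HTTP for the service metadata (endpoint URL, certificate, supported document types) before sending the AS4 message.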
54. Culture and Health Equity: Unpacking the Sociocultural Determinants of Eye Health for Indigenous Australian Diabetics
Authors: Aryati Yashadhana, Ted Fields Jnr., Wendy Fernando, Kelvin Brown, Godfrey Blitner, Francis Hayes, Ruby Stanley, Brian Donnelly, Bridgette Jerrard, Anthea Burnett, Anthony B. Zwi
Abstract:
Indigenous Australians experience some of the worst health outcomes globally, with life expectancy significantly poorer than that of non-Indigenous Australians. This is largely attributed to preventable diseases such as diabetes (prevalence 39% in Indigenous Australian adults over 55 years), which carries a raised risk of diabetic visual impairment and cataract among Indigenous adults. Our study aims to explore the interface between structural and sociocultural determinants and human agency, in order to understand how they impact (1) the accessibility of eye health and chronic disease services and (2) the potential for Indigenous patients to achieve positive clinical eye health outcomes. We used participatory action research methods and aimed to privilege the voices of Indigenous people through community collaboration. Semi-structured interviews (n=82) and patient focus groups (n=8) were conducted by Indigenous community-based researchers (CBRs) with diabetic Indigenous adults (over 40 years) in four remote communities in Australia. Interviews (n=25) and focus groups (n=4) with primary health care clinicians in each community were also conducted. Data were audio recorded, transcribed verbatim, and analysed thematically using grounded theory, comparative analysis, and NVivo 10. Preliminary analysis occurred in tandem with data collection to determine theoretical saturation. The principal investigator (AY) led analysis sessions with CBRs, fostering cultural and contextual appropriateness in interpreting responses, knowledge exchange, and capacity building. Identified themes were conceptualised into three spheres of influence: structural (health services, government), sociocultural (Indigenous cultural values, distrust of the health system, ongoing effects of colonialism and dispossession), and individual (health beliefs/perceptions, patient phenomenology).
Permeating these spheres of influence were three core determinants: economic disadvantage, health literacy/education, and cultural marginalisation. These core determinants affected the accessibility of services and the potential for patients to achieve positive clinical outcomes at every level of care (primary, secondary, tertiary). Our findings highlight the clinical realities of institutionalised and structural inequities, illustrated through the lived experiences of Indigenous patients and primary care clinicians in the four sampled communities. The complex determinants surrounding health inequity for Indigenous Australians are entrenched through a longstanding experience of cultural discrimination and ostracism. Secure and long-term funding of Aboriginal Community Controlled Health Services will be valuable but is insufficient to address issues of inequity. Rather, working collaboratively with communities to build trust and to identify needs and solutions at the grassroots level, while leveraging community voices to drive change at the systemic/policy level, is recommended.
Keywords: indigenous, Australia, culture, public health, eye health, diabetes, social determinants of health, sociology, anthropology, health equity, Aboriginal and Torres Strait Islander, primary care
53. A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an attractive solution for harbor and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. With this concern in mind, the study focuses on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations in Plaxis 2D, a well-known geotechnical software package, and in CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can solve a coupled soil-fluid problem, the investigation imposes the sheet pile displacements from Plaxis as input data for the CFD model. This provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, because of its instantaneous nature, the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile.
These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation; consequently, the seismic action must be estimated. One of the relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance, but also its many uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements when compared with more advanced methods. In this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improvement in its methodologies and approaches; consequently, designs could propose better seismic solutions thanks to advanced methods such as the findings of this study.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
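The Westergaard formulation that this abstract benchmarks against has a simple closed form: the hydrodynamic pressure on a rigid vertical face grows with the square root of depth, p(z) = (7/8) k_h γ_w √(Hz), and its resultant integrates to (7/12) k_h γ_w H². A short sketch, assuming fresh-water unit weight (γ_w ≈ 9.81 kN/m³; seawater is slightly heavier) and a pressure-per-unit-length-of-wall convention:

```python
import math

def westergaard_pressure(z, H, k_h, gamma_w=9.81):
    """Westergaard approximate hydrodynamic pressure [kPa] at depth z [m]
    on a rigid vertical face retaining water of depth H [m], for a
    horizontal seismic coefficient k_h; gamma_w in kN/m^3."""
    return 7.0 / 8.0 * k_h * gamma_w * math.sqrt(H * z)

def westergaard_resultant(H, k_h, gamma_w=9.81):
    """Closed-form integral of the pressure over the water depth:
    (7/12) * k_h * gamma_w * H^2, in kN per metre run of wall."""
    return 7.0 / 12.0 * k_h * gamma_w * H**2
```

Using the acceleration history a(t)/g in place of a constant k_h, as the study does, turns this into a fluctuating pressure rather than the constant envelope that design recommendations apply.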
52. Sustainability Framework for Water Management in New Zealand's Canterbury Region
Authors: Bryan Jenkins
Abstract:
Introduction: The expansion of irrigation in the Canterbury region has led to sustainability limits being reached for water availability and for the cumulative effects of land use intensification. The institutional framework under New Zealand’s Resource Management Act was found to be an inadequate basis for managing water at sustainability limits. An alternative paradigm for water management was developed based on collaborative governance and nested adaptive systems. This led to the formulation and implementation of the Canterbury Water Management Strategy. Methods: The nested adaptive system approach was adopted. Sustainability issues were identified at multiple spatial and time scales, defining potential failure pathways for the water resource system. These included biophysical and socio-economic issues such as water availability, cumulative effects on water quality due to land use intensification, projected changes in climate, public health, institutional arrangements, economic outcomes and externalities, and the social effects of changing technology. This led to the derivation of sustainability strategies to address these failure pathways. The collaborative governance approach involved stakeholder participation and community engagement to decide on a regional strategy; regional and zone committees of community and rūnanga (Māori group) members to develop implementation programmes for the strategy; and farmer collectives for operational management. Findings: The strategy identified that improving the efficiency of use of water already allocated was more effective in improving water availability than reliance on increased storage alone. New forms of storage with fewer adverse impacts were introduced, such as managed aquifer recharge and off-river storage. Reducing nutrient losses from land use intensification by improving management practices has been a priority.
Solution packages for addressing the degradation of vulnerable lakes and rivers have been prepared. Biodiversity enhancement projects have been initiated. Greater involvement of Māori has led to the incorporation of kaitiakitanga (resource stewardship) into implementation programmes. Emerging issues are the need for better integration of surface water and groundwater interactions; increased use of modelling of water and financial outcomes to guide decision making; and equity in allocation among existing users as well as between existing and future users. Conclusions: Sustainability analysis indicates, however, that the proposed levels of management intervention are not sufficient to achieve community targets for water management. There is a need for more proactive recovery and rehabilitation measures. Managing to environmental limits is not sufficient; rather, managing adaptive cycles is needed. Better measurement and management of water use efficiency is required. The proposed implementation packages are not sufficient to deliver the desired water quality outcomes. Greater attention to targets important to environmental and recreational interests is needed to maintain trust in the collaborative process. Implementation programmes do not adequately address climate change adaptation and greenhouse gas mitigation. Affordability is a constraint on the adaptive capacity of farmers and communities. More funding mechanisms are required to implement proactive measures. The legislative and institutional framework needs to be changed to incorporate water framework legislation, regional sustainability strategies, and water infrastructure coordination.
Keywords: collaborative governance, irrigation management, nested adaptive systems, sustainable water management
51. Biophilic Design Strategies: Four Case-Studies from Northern Europe
Authors: Carmen García Sánchez
Abstract:
The UN's 17 Sustainable Development Goals (specifically nº 3 and nº 11) urgently call for new architectural design solutions at different design scales to increase human contact with nature and so promote the health and wellbeing of primarily urban communities. The discipline of interior design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging, and under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it, and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s, a golden age of Danish modern architecture, left a legacy that has greatly influenced the domestic sphere and has led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and its variations over time. The case studies unveil memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies with traditional Japanese architecture in terms of this interaction with nature inform the research. They embody prevailing lessons for our time.
The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case studies contribute to the connection between humans and nature, after conducting fieldwork throughout varying seasons to document the multi-sensory perception of nature's transformations (via sight, touch, sound, smell, time, and movement) as a core research strategy. The cases' most outstanding features have been studied with attention to the following key parameters: 1. Space: 1.1. Relationships (itineraries); 1.2. Measures/scale; 2. Context: Landscape reading in different weather/seasonal conditions; 3. Tectonics: 3.1. Constructive joints, elements assembly; 3.2. Structural order; 4. Materiality: 4.1. Finishes; 4.2. Colors; 4.3. Tactile qualities; 5. Daylight interplay. Departing from an artistic-scientific exploration, this groundbreaking study provides sustainable practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge but to phase biophilic design into the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with nature.
Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture
50. Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops
Authors: Simon Komesker, Achim Wagner, Martin Ruskowski
Abstract:
In times of volatile markets with fluctuating demand and uncertainty in global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. To this end, a strategy is developed for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry. The main contribution of this work is a system structure mixing central and decentralized planning and control, evaluated in a simulation framework. The information system structure of current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created rule-based, with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution levels. In today's era of mixed-model (car) factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feeding back results from the process execution level (resources) and the process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks; the two-container principle even for different variants). The limited degrees of freedom of line production have produced the principle of progress-figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control.
As a result, modularly structured production systems, such as modular production according to known approaches with more degrees of freedom, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. Agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
Keywords: holonic manufacturing system, modular production system, planning and control, system structure
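The hybrid central/decentralized idea described in this abstract, local evaluation by area agents combined with a central release decision, can be reduced to a toy coordinator. This is a minimal sketch only; the agent names, scoring rules, and weights are assumptions, not the authors' implementation.

```python
# Weights with which the coordinator aggregates the agents' local scores.
AGENT_WEIGHTS = {"quality": 0.4, "logistics": 0.3, "process": 0.3}

# Each area agent scores an order using only its local view (decentralized).
def quality_agent(order):   return 0.2 if order["rework"] else 1.0
def logistics_agent(order): return 1.0 / (1 + order["buffer_distance"])
def process_agent(order):   return 1.0 / (1 + order["setup_changes"])

AGENTS = {"quality": quality_agent,
          "logistics": logistics_agent,
          "process": process_agent}

def coordinator_release(orders):
    """Central decision over decentralized evaluations: every agent scores
    every candidate order locally; the coordinator releases the order with
    the highest weighted aggregate score."""
    def score(order):
        return sum(AGENT_WEIGHTS[name] * agent(order)
                   for name, agent in AGENTS.items())
    return max(orders, key=score)
```

In a holonic system this loop would run continuously, with the weights themselves dynamically reprioritized between subsystems, which is the "dynamic prioritization of key figures" the abstract refers to.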
49. Large Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves
Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau
Abstract:
The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. These precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the states, leading to legal constraints. The area also presents moderate to high seismic activity. Even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses to our past is irreversible, leading to the necessity of acting preventively. This means providing preventive assessments for hundreds of structures with no or few documents. In this context, we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this aim, we adapt various tools, some under development, such as photogrammetry, and some still to be created, such as a preprocessor that builds meshes for FEM software from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined through field surveys and the MicMac photogrammetric software. Using structural criteria, we identified eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the needed simplifications in the most sensitive zones and to choose the most effective ones.
We also proposed threshold criteria based on the damage observed in the in situ surveys, old pictures, and the Italian code; these criteria are relevant in linear models. To validate the structural types, we carried out a vibratory measurement campaign using ambient vibratory noise and velocimeters. It also allowed us to validate this method on old masonry and to identify the modal characteristics of 20 churches. We then performed a dynamic identification between numerical and experimental modes, and updated the linear models through material and geometrical parameters that are often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models that redistribute the strains, in order to validate the damage threshold criteria used to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measure
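The modal-updating step in this abstract, adjusting uncertain material/geometrical parameters until numerical modes match measured ones, can be shown on a deliberately minimal single-degree-of-freedom surrogate. This is a toy illustration of the principle only (a real updating problem spans many modes and parameters, typically via least squares); the mass, gains, and starting stiffness below are assumptions.

```python
import math

def natural_frequency(k, m):
    """First natural frequency [Hz] of a 1-DOF surrogate: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

def update_stiffness(f_measured, m, k0, iters=50, step=0.5):
    """Toy modal-updating loop: nudge the uncertain stiffness until the
    numerical frequency matches the experimentally identified one.
    Since f scales with sqrt(k), the multiplicative update is squared."""
    k = k0
    for _ in range(iters):
        residual = f_measured - natural_frequency(k, m)
        k *= (1 + step * residual / f_measured) ** 2
    return k
```

For this 1-DOF case a closed form exists (k = m (2πf)²); the iterative loop stands in for the optimization a finite-element updating code performs when no closed form is available.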
Procedia PDF Downloads 493
48 Towards Achieving Total Decent Work: Occupational Safety and Health Issues, Problems and Concerns of Filipino Domestic Workers
Authors: Ronahlee Asuncion
Abstract:
The nature of their work and their employment relationship make domestic workers easy prey to abuse, maltreatment, and exploitation. Considering their plight, this research was conceptualized to examine: a) the level of awareness of Filipino domestic workers on occupational safety and health (OSH); b) their issues/problems/concerns on OSH; c) their intervention strategies at work to address OSH-related issues/problems/concerns; d) the issues/problems/concerns of government, employers, and non-government organizations with regard to implementing OSH for Filipino domestic workers; e) the role of government, employers, and non-government organizations in helping Filipino domestic workers address OSH-related issues/problems/concerns; and f) the necessary policy amendments/initiatives/programs to address the OSH-related issues/problems/concerns of Filipino domestic workers. The study conducted a survey using non-probability sampling, two focus group discussions, two group interviews, and fourteen face-to-face interviews. These were further supplemented with email correspondence with a key informant based in another country. Books, journals, magazines, and relevant websites further substantiated and enriched the research data. The findings of the study point to the fact that domestic workers have a low level of awareness of OSH because of, among other factors, a poor information drive, fragmented implementation of the Domestic Workers Act, an inactive campaign at the barangay level, weakened advocacy for domestic workers, the absence of a law on OSH for domestic workers, and a generally low safety culture in the country.
Filipino domestic workers suffer from insufficient rest, long hours of work, heavy workloads, occupational stress, poor accommodation, insufficient hours of sleep, deprivation of days off, accidents and injuries such as cuts, burns, slipping, stumbling, electrical grounding, and fire, verbal, physical, and sexual abuse, lack of medical assistance, non-provision of personal protective equipment (PPE), absence of knowledge of the proper way of lifting and of working at heights, and insufficient food provision. They also suffer from psychological problems because of separation from their families, limited mobility in the household where they work, injuries and accidents from using advanced home appliances and taking care of pets, low self-esteem, ergonomic problems, the need to adjust to all household members with their various needs and demands, inability to voice their complaints, the drudgery of work, and emotional stress. With regard to illness or health problems, they commonly experience leg pains, back pains, and headaches. In the absence of intervention programs like those offered in the formal employment setup, domestic workers resort to praying; turn to family, relatives, and friends for social and emotional support; connect with them through social media like Facebook, which also serves as a means of entertainment; talk to their employer; and simply try to be optimistic about their situation. Promoting OSH for domestic workers is very challenging and complicated because of interrelated cultural, knowledge, attitudinal, relational, social, resource, economic, political, institutional, and legal problems. This complexity necessitates a holistic and integrated approach, as this is not a problem requiring simple solutions.
With this recognition comes the full understanding that its success involves the action and cooperation of all duty bearers in attaining decent work for domestic workers.
Keywords: decent work, Filipino domestic workers, occupational safety and health, working conditions
Procedia PDF Downloads 262
47 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, all the more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand, but increases the computational cost on the other. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
These sequences possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of randomly sampled photon bundles were recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux calculated with the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to computation time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce computation cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented by the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks
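As a minimal illustration of the variance behaviour described above, the sketch below compares a pseudo-random Monte Carlo estimator with a Halton base-2 (van der Corput) low-discrepancy estimator on a toy one-dimensional integral; the actual radiative-transfer estimators in the study are far more involved, and the integrand here is invented for illustration:

```python
import random

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate(points):
    """Monte Carlo estimate of the integral of f(x) = x**2 over [0, 1] (exact: 1/3)."""
    return sum(x * x for x in points) / len(points)

n = 4095
mc_est = estimate([random.random() for _ in range(n)])    # pseudo-random sampling
qmc_est = estimate([halton(i + 1, 2) for i in range(n)])  # low-discrepancy sampling
print(abs(mc_est - 1 / 3), abs(qmc_est - 1 / 3))
```

For smooth integrands the QMC error typically shrinks close to O(1/n), versus the O(1/sqrt(n)) of the pseudo-random estimator, which is the faster convergence the abstract reports.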
Procedia PDF Downloads 225
46 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters
Authors: Rahil Bahrami, Kaveh Ashenayi
Abstract:
This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters (BDCs). The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: inverting, non-inverting, and interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. Non-isolated bi-directional converters can be classified according to several criteria, commonly including topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application-, or system-specific requirements. This paper adopts a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved.
In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters that maintain the same voltage polarity, useful in scenarios such as battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and experimental assessment collectively foster advancements in efficient power management and utilization. The simulation process uses PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insight into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion
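As a small illustration of the inverting vs. non-inverting distinction drawn above, the sketch below evaluates the ideal (lossless) steady-state voltage gains of two textbook non-isolated topologies as a function of duty cycle D. The specific converters simulated in PSIM are not stated in the abstract, so these formulas are generic textbook relations, not the paper's circuits:

```python
# Ideal steady-state gains, ignoring switching and conduction losses.
# The inverting buck-boost reverses output polarity; the cascaded
# (non-inverting) buck-boost preserves it.

def inverting_buck_boost_gain(d):
    """Ideal gain Vout/Vin = -D / (1 - D), for duty cycle 0 <= D < 1."""
    return -d / (1.0 - d)

def non_inverting_buck_boost_gain(d):
    """Ideal gain Vout/Vin = D / (1 - D), for duty cycle 0 <= D < 1."""
    return d / (1.0 - d)

for d in (0.25, 0.5, 0.75):
    print(d, inverting_buck_boost_gain(d), non_inverting_buck_boost_gain(d))
```

Both topologies can step voltage up or down (|gain| crosses 1 at D = 0.5); the sign of the gain is what separates the two categories.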
Procedia PDF Downloads 101
45 Enabling Wire Arc Additive Manufacturing in Aircraft Landing Gear Production and Its Benefits
Authors: Jun Wang, Chenglei Diao, Emanuele Pagone, Jialuo Ding, Stewart Williams
Abstract:
As a crucial component in aircraft, landing gear systems are responsible for supporting the plane during parking, taxiing, takeoff, and landing. Given the need for high load-bearing capacity over extended periods, 300M ultra-high strength steel (UHSS) is often the material of choice for these systems due to its exceptional strength, toughness, and fatigue resistance. In the quest for cost-effective and sustainable manufacturing solutions, Wire Arc Additive Manufacturing (WAAM) emerges as a promising alternative for fabricating 300M UHSS landing gears, owing to its advantages in near-net-shape forming of large components, cost efficiency, and reduced lead times. Cranfield University has conducted an extensive preliminary study on WAAM 300M UHSS, covering feature deposition, interface analysis, and post-heat treatment. Both Gas Metal Arc (GMA) and Plasma Transferred Arc (PTA)-based WAAM methods were explored, revealing their feasibility for defect-free manufacturing. However, as-deposited 300M features showed lower strength but higher ductility compared to their forged counterparts. Subsequent post-heat treatments were effective in normalising the microstructure and mechanical properties, meeting qualification standards. A 300M UHSS landing gear demonstrator was successfully created using PTA-based WAAM, showcasing the method's precision and cost-effectiveness. The demonstrator, measuring Ø200 mm x 700 mm, was completed in 16 hours, using 7 kg of material at a deposition rate of 1.3 kg/hr. This resulted in a significant reduction in the Buy-to-Fly (BTF) ratio compared to traditional manufacturing methods, further validating WAAM's potential for this application. A "cradle-to-gate" environmental impact assessment, which considers the cumulative effects from raw material extraction to customer shipment, has revealed promising outcomes.
Utilising Wire Arc Additive Manufacturing (WAAM) for landing gear components significantly reduces the need for raw material extraction and refinement compared to traditional subtractive methods. This, in turn, lessens the burden on subsequent manufacturing processes, including heat treatment, machining, and transportation. Our estimates indicate that the carbon footprint of the component could be halved when switching from traditional machining to WAAM. Similar reductions are observed in embodied energy consumption and other environmental impact indicators, such as emissions to air, water, and land. Additionally, WAAM offers the unique advantage of part repair by redepositing only the necessary material, a capability not available through conventional methods. Our research shows that WAAM-based repairs can drastically reduce environmental impact, even when accounting for additional transportation for repairs. Consequently, WAAM emerges as a pivotal technology for reducing environmental impact in manufacturing, aiding the industry in its crucial and ambitious journey towards Net Zero. This study paves the way for transformative benefits across the aerospace industry, as we integrate manufacturing into a hybrid solution that offers substantial savings and access to more sustainable technologies for critical component production.
Keywords: WAAM, aircraft landing gear, microstructure, mechanical performance, life cycle assessment
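A back-of-envelope check of the figures quoted above (7 kg deposited at 1.3 kg/hr within a 16-hour build) can be sketched as follows; the final machined mass below is a hypothetical value chosen for illustration, since the abstract reports only the deposited mass and deposition rate:

```python
# Quick arithmetic on the demonstrator figures from the abstract.
# `final_part_mass_kg` is assumed, NOT stated in the abstract.

deposited_mass_kg = 7.0            # from the abstract
deposition_rate_kg_per_h = 1.3     # from the abstract
final_part_mass_kg = 5.0           # hypothetical, for illustration only

deposition_time_h = deposited_mass_kg / deposition_rate_kg_per_h
buy_to_fly = deposited_mass_kg / final_part_mass_kg

# Arc-on deposition accounts for roughly 5.4 of the 16 total build hours;
# the remainder covers interpass cooling, setup, and handling.
print(f"deposition time ~{deposition_time_h:.1f} h, BTF ~{buy_to_fly:.2f}")
```

A forged-and-machined equivalent typically starts from a much heavier billet, which is why the BTF reduction claimed for WAAM is plausible even under these rough assumptions.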
Procedia PDF Downloads 161
44 Infrared Spectroscopy Fingerprinting of Herbal Products: Application of the Hypericum perforatum L. Supplements
Authors: Elena Iacob, Marie-Louise Ionescu, Elena Ionescu, Carmen Elena Tebrencu, Oana Teodora Ciuperca
Abstract:
Infrared spectroscopy (FT-IR) is an advanced technique frequently used to authenticate both raw materials and final products through their specific fingerprints and to determine plant-extract biomarkers based on their functional groups. In recent years, the market for Hypericum has grown rapidly, and so have cases of adulteration/replacement, especially for the species Hypericum perforatum L. The presence or absence of the same biomarkers provides preliminary identification of Hypericum species safe for use in the manufacture of food supplements. The main objective of the work was to characterize the main biomarkers of Hypericum perforatum L. (St. John's wort) and to identify this species in herbal food supplements by its specific FT-IR fingerprint. An experimental program was designed in order to test: (1) the raw material (St. John's wort); (2) the intermediate raw material (St. John's wort dry extract); (3) the finished products: tablets based on powders, on extracts, or on powder and extract, and a hydroalcoholic solution from a herbal mixture based on St. John's wort. FT-IR analysis yielded the spectra of the raw materials, intermediates, and finished products, with absorption bands corresponding to aliphatic and aromatic structures; examination was done both individually and by comparison between the Hypericum perforatum L. plant species and the finished product. The tests were done in correlation with phytochemical markers for authenticating the species Hypericum perforatum L.: hyperoside, rutin, quercetin, isoquercetin, luteolin, apigenin, hypericin, hyperforin, and chlorogenic acid. Samples were analyzed using a Shimadzu FT-IR spectrometer; the infrared spectrum of each sample was recorded in the MIR region, from 4000 to 1000 cm-1, and the fingerprint region was then selected for data analysis.
The following functional groups were identified, whose stretching vibrations suggest groups present in the compounds of interest (flavones: rutin, hyperoside; polyphenolcarboxylic acids: chlorogenic acid; naphthodianthrones: hypericin): hydroxyl (OH) groups of the free-alcohol type: rutin, hyperoside, chlorogenic acid; C=O bonds from structures with free carbonyl groups of aldehyde, ketone, carboxylic, or ester type: hypericin and chlorogenic acid; C=C bonds of the aromatic ring (condensed aromatic hydrocarbons, heterocyclic compounds): present in all compounds of interest; phenolic OH groups: present in all compounds of interest; C-O-C groups from glycoside structures: rutin, hyperoside, chlorogenic acid. The experimental results show that: (I) analysis of the six fingerprint regions indicated the presence of specific functional groups: (1) 1000-1130 cm-1 (C-O-C of glycoside structures); (2) 1200-1380 cm-1 (carbonyl C-O or phenolic O-H); (3) 1400-1450 cm-1 (aromatic C=C); (4) 1600-1730 cm-1 (C=O carbonyl); (5) 2850-2930 cm-1 (-CH3, -CH2-, =CH-); (6) 3380-3920 cm-1 (OH, free alcohol type); (II) comparative FT-IR spectral analysis indicates the authenticity of the finished products (tablets) in terms of Hypericum perforatum L. content; (III) infrared spectroscopy is an adequate technique for the identification and authentication of medicinal herbs, intermediate raw materials, and food supplements, except in the form of solutions, where the results are not conclusive.
Keywords: Authentication, FT-IR fingerprint, Herbal supplements, Hypericum perforatum L.
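The six-region fingerprint analysis above amounts to a band-assignment lookup; a minimal sketch of that logic, with region bounds taken from the abstract (the upper bound of region six is assumed to be 3920 cm-1), might look like this:

```python
# Map a measured absorption wavenumber (cm^-1) to one of the six
# fingerprint regions listed in the abstract. Illustrative sketch only.

FINGERPRINT_REGIONS = [
    ((1000, 1130), "C-O-C of glycoside structures"),
    ((1200, 1380), "carbonyl C-O or phenolic O-H"),
    ((1400, 1450), "aromatic C=C"),
    ((1600, 1730), "C=O carbonyl"),
    ((2850, 2930), "-CH3, -CH2-, =CH-"),
    ((3380, 3920), "OH, free alcohol type"),
]

def assign_band(wavenumber_cm1):
    """Return the functional-group label for a wavenumber, or None if it lies outside all six regions."""
    for (low, high), label in FINGERPRINT_REGIONS:
        if low <= wavenumber_cm1 <= high:
            return label
    return None

print(assign_band(1715))  # a band in the carbonyl-stretch region
```

A fingerprint comparison between a supplement and the authentic herb would then reduce to checking that the same set of regions is populated in both spectra.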
Procedia PDF Downloads 376
43 The Plight of the Rohingyas: Design Guidelines to Accommodate Displaced People in Bangladesh
Authors: Nazia Roushan, Maria Kipti
Abstract:
The sensitive issue of a large-scale entry of Rohingya refugees into Bangladesh has arisen again since August 2017. Incited by ethnic and religious conflict, the Rohingyas, an ethnic group concentrated in the western state of Rakhine in Myanmar, have been fleeing to what is now Bangladesh from as early as the late 1700s in four main exoduses. This long-standing persecution has recently escalated, and accommodating the recent wave of exodus has been especially challenging due to the sheer volume of a million refugees concentrated in camps in two small administrative units (upazilas) in the south-east of the country: the host area. This drastic change in the host area's social fabric is putting great strain on the country's economic, demographic, and environmental stability and security. Although Bangladesh's long-term experience with disaster management has enabled it to respond rapidly to the crisis, the government is failing to cope with this enormous problem and has taken insufficient steps towards improving living conditions, in part to inhibit the inflow of more refugees. On top of that, the absence of a comprehensive national refugee policy and the density of the camp structures are constricting the upgrading of the shelters to international standards. As of December 2016, the combined number of internally displaced persons (IDPs) due to conflict and violence (stock) and new displacements due to disasters (flow) in Bangladesh had exceeded 1 million. These numbers have increased dramatically in the last few months. Moreover, by 2050, Bangladesh will have as many as 25 million climate refugees from its coastal districts alone. To enhance the resilience of the vulnerable, it is crucial to methodically position further interventions between Disaster Risk Reduction for Resilience (DRR) and the concept of Building Back Better (BBB) in the rehabilitation-reconstruction period.
Considering these points, this paper provides a palette of options for design guidelines related to living spaces and infrastructure for refugees. This will encourage the development of national standards for refugee camps and of national- and local-level rehabilitation-reconstruction practices. Unhygienic living conditions, vulnerability, and a general lack of control over life are pervasive throughout the camps. This paper therefore proposes site-specific strategic and physical planning and design for refugee shelters in Bangladesh that will lead to sustainable living environments through the following: a) a site survey of two registered and one unregistered makeshift refugee camp to document and study their physical conditions; b) questionnaires and semi-structured focus group discussions carried out among the refugees and stakeholders to understand their lived experiences and needs; and c) combining the findings with international minimum standards for shelter and settlement from the International Federation of Red Cross and Red Crescent Societies (IFRC), Médecins Sans Frontières (MSF), and the United Nations High Commissioner for Refugees (UNHCR). The proposals include temporary shelter solutions that balance lived spaces against regimented, repetitive plans using readily available and cheap materials, erosion control and slope stabilization strategies, and, most importantly, coping mechanisms for the refugees to be self-reliant and resilient.
Keywords: architecture, Bangladesh, refugee camp, resilience, Rohingya
Procedia PDF Downloads 237
42 FELIX: 40 Hz Masked Flickering Light as a Potential Treatment of Major Depressive Disorder
Authors: Nikolas Aasheim, Laura Sakalauskaitė, Julie Dubois, Malina Ploug Larsen, Paul Michael Petersen, Marcus S. Carstensen, Mai Nguyen, Line Katrine Harder Clemmensen, Kamilla Miskowiak, Klaus Martiny
Abstract:
Background: Major depressive disorder (MDD) is a debilitating condition that affects more than 300 million people worldwide and profoundly impacts well-being and health. Current treatments are based on a trial-and-error approach, and reliable biomarkers are needed for more informed and personalized treatment solutions. One potential biomarker is aberrant gamma-frequency (30-80 Hz) brainwaves, hypothesized to originate from deficiencies in the excitatory-inhibitory interaction between pyramidal cells and interneurons. An imbalance in this interaction is described as a crucial pathological mechanism in various neuropsychiatric conditions, including MDD, and its modulation has been investigated as a potential target. A specific type of steady-state visually evoked potential (SSVEP) in the gamma frequency band, referred to as gamma entrainment using sensory stimuli (GENUS), particularly around 40 Hz, entrains fast-spiking PV+ interneurons at large scale, facilitating coordinated activity in key brain regions, reduced neuronal and synaptic loss, and enhanced synaptic stability and plasticity. GENUS has shown promise in improving sleep, offering neuroprotective effects in Alzheimer's disease (AD), and reducing pathological markers such as amyloid beta and tau proteins, as seen in animal models. In this study, we explore the antidepressant, cognitive, and electrophysiological effects of a novel, non-invasive brain stimulation (NIBS) approach utilizing a 40 Hz invisible spectral flicker to induce gamma activity in patients diagnosed with MDD. This non-invasive targeted stimulation of lower gamma band activity (40 Hz) is designed to modulate neural circuits associated with mood and cognitive functions, providing a potential new therapeutic avenue for MDD.
Methods and Design: 60 patients with a current diagnosis of a major depressive episode will be enrolled in a randomized, double-blinded, placebo-controlled trial. The active treatment group will receive 40 Hz invisible spectral flickering light stimulation, while the control group will receive continuous light matched in colour temperature and brightness. Patients in both groups will get an hour of daily light treatment in their own homes and will attend four follow-up visits to assess depression severity measured by the Hamilton Depression Rating Scale (HAM-D₆), several aspects of sleep, cognitive function, and quality of life. Additionally, exploratory EEG is conducted to assess spectral changes throughout the protocol. The primary endpoint is the mean change from baseline to week 6 in depression severity (HAM-D₆ subset) between the groups. Current state of affairs/timeline: The FELIX study was initiated at the beginning of 2022 and is planned to reach publication in December 2025. 21 participants have been enrolled in the protocol thus far, with trials and recruitment expected to be finished by the end of 2024.
Keywords: major depressive disorder, gamma, neurostimulation, EEG
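The exploratory EEG analysis would need to quantify power at the 40 Hz stimulation frequency; the study does not describe its spectral method, so the following single-bin detector (the Goertzel algorithm, applied to a synthetic signal) is only an illustrative assumption:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Squared magnitude of the DFT bin nearest target_hz (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic one-second "EEG" trace at 256 Hz: a unit-amplitude 40 Hz SSVEP
# component superimposed on a stronger 10 Hz alpha rhythm.
fs = 256
sig = [math.sin(2 * math.pi * 40 * t / fs) + 2 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(fs)]

print(goertzel_power(sig, fs, 40) > goertzel_power(sig, fs, 25))
```

In a trial setting, a rise in 40 Hz power during active stimulation relative to the continuous-light control would be the spectral change of interest.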
Procedia PDF Downloads 9
41 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operation modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Key factors to consider in reverse logistics network design include: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers; in this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or a lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed-integer linear programming to find the optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
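To make the kind of decision the model formalizes concrete, the toy sketch below chooses which candidate facilities to open so that fixed-opening plus per-unit transport costs are minimized while capacity covers the return volume. The authors use a mixed-integer LP solver; this exhaustive search over the binary open/close variables, on made-up data, merely illustrates the same objective:

```python
from itertools import product

# Candidate facilities: name -> (fixed opening cost, capacity, unit transport cost).
# All figures are invented for illustration.
facilities = {
    "central_hub": (900, 100, 2.0),
    "retail_A": (200, 40, 5.0),
    "retail_B": (250, 50, 4.5),
}
return_volume = 80  # units of returned product to absorb

def plan_cost(open_set):
    """Total cost of a set of opened facilities, or None if capacity is insufficient."""
    capacity = sum(facilities[f][1] for f in open_set)
    if capacity < return_volume:
        return None  # infeasible: cannot absorb all returns
    remaining = return_volume
    cost = sum(facilities[f][0] for f in open_set)  # fixed opening costs
    # Route returns to the cheapest open facility first (greedy allocation).
    for f in sorted(open_set, key=lambda f: facilities[f][2]):
        served = min(remaining, facilities[f][1])
        cost += served * facilities[f][2]
        remaining -= served
    return cost

# Enumerate every open/close combination and keep the cheapest feasible one.
best = min(
    (s for bits in product([0, 1], repeat=len(facilities))
     for s in [tuple(f for f, b in zip(facilities, bits) if b)]
     if plan_cost(s) is not None),
    key=plan_cost,
)
print(best, plan_cost(best))
```

On this data the two decentralized retail outlets beat the central hub, echoing the centralized-vs-decentralized trade-off discussed above; a real MILP would additionally handle lead times and multi-echelon flows.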
Procedia PDF Downloads 41
40 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics
Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones
Abstract:
Motor-vehicle-related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike a 'typical' four-wheel motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open-frame construction, a canvas roof and side curtains, a small driver's cabin, handlebar controls, and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle-impact-related pedestrian crash kinematic analyses and/or injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America, and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a Finite Element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre, or offset), and relative pedestrian impact position (front, side, and rear).
The output data of each impact simulation were correlated against reported injury metrics: Head Injury Criterion (front, side, and rear), Neck Injury Criterion (front, side, and rear), Abbreviated Injury Scale, and reported risk level, adding greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, and some of the impact simulations may even indicate a risk of fatality. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to 'real-world' accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions that present the greatest risk of producing pedestrian injury.
Keywords: auto rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact
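The Head Injury Criterion referenced above can be computed directly from a sampled resultant head acceleration trace; the sketch below implements the standard HIC15 window search on a synthetic triangular pulse, which is made-up data, not simulation output from the study:

```python
# HIC = max over windows [t1, t2] (t2 - t1 <= 15 ms for HIC15) of
#   (t2 - t1) * [ (1 / (t2 - t1)) * integral of a(t) dt ] ** 2.5
# with acceleration a in g and time in seconds. Illustrative sketch only.

def hic(accel_g, dt, max_window_s=0.015):
    """HIC of a uniformly sampled acceleration trace (in g), time step dt (s)."""
    n = len(accel_g)
    # Cumulative trapezoidal integral of a(t) dt, so any window integral
    # is a difference of two entries.
    cum = [0.0]
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    max_steps = round(max_window_s / dt)
    for i in range(n):
        for j in range(i + 1, min(i + max_steps, n - 1) + 1):
            duration = (j - i) * dt
            avg = (cum[j] - cum[i]) / duration
            best = max(best, duration * avg ** 2.5)
    return best

# Synthetic 10 ms triangular pulse peaking at 150 g, sampled at 10 kHz.
dt = 1e-4
pulse = [150 * (1 - abs(t * dt - 0.005) / 0.005) for t in range(101)]
print(hic(pulse, dt))
```

Comparing the resulting value against published HIC thresholds (e.g. the commonly cited HIC15 limit of 700) is how a simulated impact is mapped to an injury risk level.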
Procedia PDF Downloads 194
39 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints
Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes
Abstract:
Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. Bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with ones that are renewable and biodegradable, while a Circular Economy aims at sustainable and resource-efficient operations. The term "Circular Bioeconomy", which can be summarized as all activities that transform biomass for its use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a Circular Bioeconomy due to its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology. Recently, there have been political and legislative efforts to develop the Portuguese Circular Bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as some statistics show, Portugal is still far from achieving circularity. According to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectorial heterogeneity and fragmentation, the prevalence of small producers, a lack of attractiveness for younger generations, and the absence of collaborative solutions amongst producers and along value chains. Regarding the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy. 
Together with the limited number of incentives the country has to offer to those that intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information regarding the actual availability of biological resources is also a major issue. Low bioeconomy literacy among many of the sectoral agents and in society in general directly impacts decisions on production and final consumption. The WinBio project seeks to outline a strategic approach for the management of weaknesses and opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work carried out included the identification and analysis of agents in the interior region of Portugal, natural endogenous resources, and products and processes with development potential. Specific flows of biological wastes, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge amounts of waste streams, which in turn provide an opportunity for the establishment of local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context. The richness of natural resources and known potential of the interior region of Portugal is a major key to developing the Circular Economy and the sustainability of the country.
Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy
Procedia PDF Downloads 94
38 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has become a recent topic of great interest. Flying to space objects like asteroids presents two main opportunities: one is to find rare earth elements, the other is to gain scientific knowledge about the origin of our solar system. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while conserving resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications as well, such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. The indirect methods have been studied for several decades and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. 
sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a solver combining SQP at the outer level with IP for the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. At the same time, they show the enormous increase in possibilities for flight maneuvers gained by being able to consider different and opposing mission objectives.
Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning
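To make the transcription idea concrete, a minimal sketch of full discretization follows. This is not KaNaRiA's WORHP setup: a toy one-dimensional double integrator stands in for the spacecraft dynamics, and SciPy's SLSQP implementation of SQP replaces WORHP. The infinite-dimensional OCP (minimize control energy while steering from rest at x=0 to rest at x=1 in one time unit) becomes a finite NLP whose equality constraints are the collocation "defects":

```python
import numpy as np
from scipy.optimize import minimize

N = 40                     # grid intervals
h = 1.0 / N                # step size
nx = N + 1                 # grid nodes

def unpack(z):
    # decision vector holds all discretized states and controls
    return z[:nx], z[nx:2*nx], z[2*nx:]

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(0.5 * (u[:-1]**2 + u[1:]**2))   # trapezoidal ∫ u² dt

def defects(z):
    # trapezoidal collocation: each step must follow x' = v, v' = u
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])  # rest-to-rest transfer

res = minimize(objective, np.zeros(3 * nx), method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 200})
x, v, u = unpack(res.x)
print(res.success, res.fun)   # analytic optimum: J = 12, u(t) = 6 - 12t
```

Trapezoidal collocation turns the ODE into equality constraints, exactly the full-discretization approach described above; the NLP solution approaches the analytic optimum as the grid is refined.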
Procedia PDF Downloads 412
37 Biocellulose as Platform for the Development of Multifunctional Materials
Authors: Junkal Gutierrez, Hernane S. Barud, Sidney J. L. Ribeiro, Agnieszka Tercjak
Abstract:
Nowadays, interest in green nanocomposites and in the development of more environmentally friendly products has increased. Bacterial cellulose has recently been investigated as an attractive, environmentally friendly material for the preparation of low-cost nanocomposites. The formation of cellulose by laboratory bacterial cultures is an interesting and attractive biomimetic route to pure cellulose with excellent properties. Additionally, properties such as molar mass, molar mass distribution, and supramolecular structure can be controlled using different bacterial strains, culture media and conditions, including the incorporation of different additives. This kind of cellulose is a natural nanomaterial and therefore has a high surface-to-volume ratio, which is highly advantageous in composite production. Such a property, combined with good biocompatibility, high tensile strength, and high crystallinity, makes bacterial cellulose a potential material for applications in different fields. The aim of this work was the fabrication of novel hybrid inorganic-organic composites based on bacterial cellulose, cultivated in our laboratory, as a template. This kind of biohybrid nanocomposite combines the excellent properties of bacterial cellulose with those displayed by typical inorganic nanoparticles, such as optical, magnetic and electrical properties, luminescence, ionic conductivity and selectivity, as well as chemical or biochemical activity. In addition, the functionalization of cellulose with inorganic materials opens new pathways for the fabrication of novel multifunctional hybrid materials with promising properties for a wide range of applications, namely electronic paper, flexible displays, solar cells, and sensors, among others. 
In this work, different pathways for the fabrication of multifunctional biohybrid nanopapers with tunable properties are presented, based on BC modified with amphiphilic poly(ethylene oxide-b-propylene oxide-b-ethylene oxide) (EPE) block copolymer, sol-gel synthesized nanoparticles (titanium oxide, vanadium oxide and a mixture of both), and functionalized iron oxide nanoparticles. In situ (biosynthesized) and ex situ (post-production) approaches were successfully used to modify BC membranes. Bacterial cellulose based biocomposites modified with different EPE block copolymer contents were developed by the in situ technique; thus, BC growth conditions were manipulated to fabricate EPE/BC nanocomposites during the biosynthesis. Additionally, hybrid inorganic/organic nanocomposites based on BC membranes and inorganic nanoparticles were designed via the ex situ method, by immersion of never-dried BC membranes into different nanoparticle solutions: on the one hand, sol-gel synthesized nanoparticles (titanium oxide, vanadium oxide and a mixture of both), and on the other hand, superparamagnetic iron oxide nanoparticles (SPION) in Fe2O3-PEO solution. The morphology of the designed novel hybrid bionanocomposite materials was investigated by atomic force microscopy (AFM) and scanning electron microscopy (SEM). In order to characterize the obtained materials with future applications in mind, different techniques were employed: optical properties were analyzed by UV-vis spectroscopy and spectrofluorimetry, while electrical properties were studied at the nano- and macroscale using electric force microscopy (EFM), tunneling atomic force microscopy (TUNA) and a Keithley semiconductor analyzer, respectively. Magnetic properties were measured by means of magnetic force microscopy (MFM). Additionally, mechanical properties were also analyzed.
Keywords: bacterial cellulose, block copolymer, advanced characterization techniques, nanoparticles
Procedia PDF Downloads 230
36 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology
Authors: Ashley L. Freeman, Jessica D. Watkins
Abstract:
TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, providing healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine, transcending geographical boundaries. Through qualitative research, the extension of timely, high-quality care has been shown to address the critical needs of patients in remote and underserved areas. TeleEM's service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system that routes charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. 
TeleCTAs collaborate closely with TeleEM Physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Not only has TeleEM advanced the delivery of emergency medical care through virtual technology, focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, but it has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.
Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine
Procedia PDF Downloads 84
35 Utilization of Functionalized Biochar from Water Hyacinth (Eichhornia crassipes) as Green Nano-Fertilizers
Authors: Adewale Tolulope Irewale, Elias Emeka Elemike, Christian O. Dimkpa, Emeka Emmanuel Oguzie
Abstract:
As the global population steadily approaches the 10 billion mark, the world is currently faced with two major challenges, among others: access to sustainable and clean energy, and food security. Accessing cleaner and sustainable energy sources to drive the global economy and technological advancement, and feeding the teeming human population, require sustainable, innovative, and smart solutions. To solve the food production problem, producers have relied on fertilizers as a way of improving crop productivity. Commercial inorganic fertilizers, which are employed to boost agricultural food production, however, pose significant ecological and economic problems, including soil and water pollution, reduced input efficiency, development of highly resistant weeds, micronutrient deficiency, soil degradation, and increased soil toxicity. These ecological and sustainability concerns have raised uncertainties about the continued effectiveness of conventional fertilizers. With the application of nanotechnology, plant biomass upcycling offers several advantages for greener energy production and sustainable agriculture: reducing environmental pollution, increasing soil microbial activity, and recycling carbon, thereby reducing GHG emissions. This innovative technology has the potential to enable a circular economy and sustainable agricultural practice. Nanomaterials can greatly enhance the quality and nutrient composition of organic biomass, which in turn allows for the conversion of biomass into nanofertilizers that are potentially more efficient. Water hyacinth plants harvested from an inland waterway at Warri, Delta State, Nigeria, were air-dried and milled into powder form. The dry biomass was used to prepare biochar at a pre-determined temperature in an oxygen-deficient atmosphere. 
Physicochemical analysis of the resulting biochar was carried out to determine its porosity and general morphology using Scanning Transmission Electron Microscopy (STEM). The functional groups (-COOH, -OH, -NH2, -CN, -C=O) were assessed using Fourier Transform Infrared (FTIR) spectroscopy, while the heavy metals (Cr, Cu, Fe, Pb, Mg, Mn) were analyzed using Inductively Coupled Plasma - Optical Emission Spectrometry (ICP-OES). Impregnation of the biochar with nanonutrients was achieved under varied conditions of pH, temperature, nanonutrient concentration and residence time to achieve optimum adsorption. Adsorption and desorption studies were carried out on the resulting nanofertilizer to determine the kinetics of nutrient bio-availability to plants when used as a green fertilizer. Water hyacinth (Eichhornia crassipes), an aggressively invasive aquatic plant known for its rapid growth and profusion, is examined in this research to harness its biomass as a sustainable feedstock for formulating functionalized nano-biochar fertilizers, offering various benefits including water hyacinth biomass upcycling, improved nutrient delivery to crops and aquatic ecosystem remediation. Altogether, this work aims to create value across the three dimensions of environmental, economic, and social benefit.
Keywords: biochar-based nanofertilizers, Eichhornia crassipes, greener agriculture, sustainable ecosystem, water hyacinth
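Adsorption kinetics of the kind described above are commonly summarized with the pseudo-second-order model, t/qt = 1/(k2·qe²) + t/qe. A hedged sketch of that fit follows; the data points are invented for illustration and are not measurements from this study:

```python
import numpy as np

# Hypothetical uptake curve: nutrient adsorbed per gram of biochar over time
t = np.array([5., 10., 20., 40., 60., 90., 120.])          # contact time, min
qt = np.array([12.1, 18.3, 24.6, 29.0, 30.8, 32.0, 32.5])  # uptake, mg/g

# Linearized pseudo-second-order form: regress t/qt against t.
# slope = 1/qe, intercept = 1/(k2 * qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope               # equilibrium adsorption capacity (mg/g)
k2 = slope**2 / intercept      # pseudo-second-order rate constant (g/(mg·min))
print(f"qe ≈ {qe:.1f} mg/g, k2 ≈ {k2:.4f} g/(mg·min)")
```

The fitted qe estimates the nutrient-loading capacity of the biochar, and k2 how quickly that capacity is approached, which is the kind of summary the adsorption/desorption studies would feed into bio-availability assessment.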
Procedia PDF Downloads 65
34 Optimal Pressure Control and Burst Detection for Sustainable Water Management
Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana
Abstract:
Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing Pressure Reducing Valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark Water Distribution System (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. 
To enhance precision, burst event localization is achieved through meticulous analysis of time differentials in the arrival of negative pressure waveforms across distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and their concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring
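As a hedged illustration of the CUSUM stage of such a burst-detection pipeline, the sketch below runs a one-sided CUSUM detector on a synthetic pressure trace. The trace, drift and threshold are illustrative stand-ins, not the values tuned for the Kapra test bed:

```python
import numpy as np

# Synthetic pressure signal: steady 50 m head with sensor noise, then a
# sudden 3 m drop at sample 600 simulating the negative pressure wave
# produced by a pipe burst.
rng = np.random.default_rng(7)
baseline = 50.0
p = baseline + rng.normal(0.0, 0.2, 1000)
p[600:] -= 3.0

def cusum_detect(signal, target, drift=0.3, threshold=2.0):
    """One-sided (downward) CUSUM: accumulate evidence that the signal has
    dropped below `target`; alarm when the cumulative sum crosses `threshold`.
    `drift` is the slack that suppresses alarms from ordinary noise."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (target - x) - drift)
        if s > threshold:
            return i               # index of the first alarm
    return None

alarm = cusum_detect(p, baseline)
print(f"burst alarm at sample {alarm}")
```

Because the drift term exceeds the noise level, the statistic stays near zero before the burst and accumulates rapidly once the pressure drops, so the alarm fires within a sample or two of the simulated event.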
Procedia PDF Downloads 88
33 A Self-Heating Gas Sensor of SnO2-Based Nanoparticles Electrophoretic Deposited
Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Sonia M. Zanetti, Mario Cilense, Leinig Antônio Perazolli, Maria Aparecida Zaghete
Abstract:
The contamination of the environment has been one of the biggest problems of our time, mostly due to the development of many industries. SnO2 is an n-type semiconductor with a band gap of about 3.5 eV; its electrical conductivity depends on the type and amount of modifier agents added to the ceramic matrix during synthesis, allowing its application in sensing gaseous pollutants in ambient air. The chemical synthesis by the polymeric precursor method consists of a complexation reaction between tin ions and citric acid at 90 °C for 2 hours and the subsequent addition of ethylene glycol for polymerization at 130 °C for 2 hours. Polymeric resins of zinc, cobalt and niobium ions were also prepared. Stoichiometric amounts of the solutions were mixed to obtain the systems (Zn, Nb)-SnO2 and (Co, Nb)-SnO2. The metal immobilization reduces segregation during calcination, resulting in a crystalline oxide with high chemical homogeneity. The resin was pre-calcined at 300 °C for 1 hour, milled in an attritor mill at 500 rpm for 1 hour, and then calcined at 600 °C for 2 hours. X-Ray Diffraction (XRD) indicated the formation of the SnO2 rutile phase (JCPDS card no. 41-1445). Characterization by high-resolution scanning electron microscopy showed a nanostructured ceramic powder of spherical particles 10-20 nm in diameter. 20 mg of the SnO2-based powder was kept in 20 ml of isopropyl alcohol and then taken to an electrophoretic deposition (EPD) system. The EPD method allows control of film thickness through the voltage or current applied to the electrophoretic cell and through the deposition time of the ceramic particles. This procedure yields films in a short time at low cost, opening prospects for a new generation of smaller devices with easy technology integration. 
In this research, films were obtained on an alumina substrate with interdigitated electrodes after applying 2 kV for 5 and 10 minutes in cells containing alcoholic suspensions of the (Zn, Nb)-SnO2 and (Co, Nb)-SnO2 powders, forming a sensing layer. The substrate has integrated micro-hotplates that provide instantaneous and precise temperature control when a voltage is applied. The films were sintered at 900 and 1000 °C in a 770 W microwave oven, adapted by the research group itself with a temperature controller. This sintering is a fast process with a homogeneous heating rate, which promotes controlled grain growth and the diffusion of the modifier agents, inducing the creation of intrinsic defects that change the electrical characteristics of the SnO2-based material. This study has successfully demonstrated a microfabricated system with an integrated micro-hotplate for the detection of CO and NO2 at different concentrations and temperatures, with self-heating SnO2-based nanoparticle films, suitable both for industrial process monitoring and for detecting low concentrations in buildings and residences in order to safeguard human health. The results indicate the possibility of developing gas sensor devices with low power consumption for integration into portable electronic equipment with fast analysis.
Acknowledgments: The authors thank the LMA-IQ for providing the FEG-SEM images, and acknowledge the financial support of this project by the Brazilian research funding agencies CNPq, FAPESP 2014/11314-9 and CEPID/CDMF-FAPESP 2013/07296-2.
Keywords: chemical synthesis, electrophoretic deposition, self-heating, gas sensor
Procedia PDF Downloads 276
32 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness 'sins' must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be amplified. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumer CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. 
From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain over the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
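The k-fold cross-validation used for the Decision Support System layer can be sketched as follows. A plain logistic model stands in for the paper's neural network, and the "credit" data are synthetic stand-ins for a legitimate predictor set:

```python
import numpy as np

# Synthetic stand-in for a credit predictor set and repayment labels
rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.7, 0.0])          # hypothetical ground truth
y = (X @ w_true + rng.normal(0, 0.5, n) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=300):
    """Logistic regression by gradient ascent on the log-likelihood;
    a stand-in for the paper's neural network training step."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def kfold_accuracy(X, y, k=5):
    """k-fold CV: each fold is held out once while the rest train the model."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = fit_logistic(X[train], y[train])
        pred = (X[test] @ w > 0).astype(float)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

print(f"5-fold CV accuracy: {kfold_accuracy(X, y):.2f}")
```

The point of the scheme is that every observation serves once as held-out data, so the averaged accuracy estimates out-of-sample performance rather than fit to the training set, which matters when the scorer's recommendations must later survive oversight.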
Procedia PDF Downloads 36
31 Oxidation Behavior of Ferritic Stainless Steel Interconnects Modified Using Nanoparticles of Rare-Earth Elements under Operating Conditions Specific to Solid Oxide Electrolyzer Cells
Authors: Łukasz Mazur, Kamil Domaradzki, Bartosz Kamecki, Justyna Ignaczak, Sebastian Molin, Aleksander Gil, Tomasz Brylewski
Abstract:
The rising global power consumption necessitates the development of new energy storage solutions. Prospective technologies include solid oxide electrolyzer cells (SOECs), which convert surplus electrical energy into hydrogen. An electrolyzer cell consists of a porous anode, a porous cathode, and a dense electrolyte. Power output is increased by connecting cells into stacks using interconnects. Interconnects are currently made from high-chromium ferritic steels, for example Crofer 22 APU, which exhibit high oxidation resistance and a thermal expansion coefficient similar to that of electrode materials. These materials have one disadvantage: their area-specific resistance (ASR) gradually increases due to the formation of a Cr₂O₃ scale on their surface as a result of oxidation. The chromia in the scale also reacts with the water vapor present in the reaction media, forming volatile chromium oxyhydroxides, which in turn react with electrode materials and cause their deterioration. The electrochemical efficiency of SOECs thus decreases. To mitigate this, the interconnect surface can be modified with protective-conducting coatings of spinel or other materials. The high prices of SOEC components, especially the Crofer 22 APU, have prevented their widespread adoption. Less expensive counterparts therefore need to be found, and their properties need to be enhanced to make them viable. Candidates include the Nirosta 4016/1.4016 low-chromium ferritic steel with a chromium content of just 16.3 wt%. This steel's resistance to high-temperature oxidation was improved by depositing Gd₂O₃ nanoparticles on its surface via either dip coating or electrolysis. Modification with CeO₂ or Ce₀.₉Y₀.₁O₂ nanoparticles deposited by means of spray pyrolysis was also tested. These methods were selected because of their low cost and simplicity of application. 
The aim of this study was to investigate the oxidation kinetics of Nirosta 4016/1.4016 modified using the aforementioned methods and to subsequently measure the obtained samples' ASR. The samples were oxidized for 100 h in air as well as in air/H₂O and Ar/H₂/H₂O mixtures at 1073 K. Such conditions reflect those found in the anode and cathode operating space during real-life use of SOECs. Phase and chemical composition and the microstructure of the oxidation products were determined using XRD and SEM-EDS. ASR was measured over the range of 623-1073 K using a four-point, two-probe DC technique. The results indicate that the applied nanoparticles improve the oxidation resistance and electrical properties of the studied layered systems. The properties of individual systems varied significantly depending on the applied reaction medium. Gd₂O₃ nanoparticles improved oxidation resistance to a greater degree than either CeO₂ or Ce₀.₉Y₀.₁O₂ nanoparticles. On the other hand, the cerium-containing nanoparticles improved electrical properties regardless of the reaction medium. The ASR values of all surface-modified steel samples were below the 0.1 Ω·cm² threshold set for interconnect materials, which was exceeded by the unmodified reference sample. It can be concluded that the applied modifications increased the oxidation resistance of Nirosta 4016/1.4016 to a level that allows its use as an SOEC interconnect material.
Acknowledgments: Funding from the program 'Excellence initiative – research university' for the AGH University of Krakow is gratefully acknowledged (TB).
Keywords: cerium oxide, ferritic stainless steel, gadolinium oxide, interconnect, SOEC
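ASR measured over a temperature range of this kind is typically reduced to an activation energy via the Arrhenius-type relation ASR/T = A·exp(Ea/(kB·T)) commonly applied to oxide scales on interconnect steels. A hedged sketch of that reduction, using invented data points rather than this study's measurements:

```python
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant, eV/K

# Hypothetical ASR-vs-temperature data for a coated steel sample
T = np.array([623., 723., 823., 923., 1023., 1073.])        # K
asr = np.array([0.60, 0.21, 0.090, 0.047, 0.028, 0.024])    # Ω·cm², illustrative

# Arrhenius reduction: ln(ASR/T) is linear in 1/T with slope Ea/kB
slope, intercept = np.polyfit(1.0 / T, np.log(asr / T), 1)
Ea = slope * kB                                  # activation energy, eV
print(f"Ea ≈ {Ea:.2f} eV")
```

A single activation energy summarizes the whole 623-1073 K sweep and makes coatings directly comparable; the value extracted from these invented points (roughly 0.5 eV) is merely of the order reported for chromia-forming interconnect steels.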
Procedia PDF Downloads 87
30 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. 
Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry. Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
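The fusion of sensor, vision, and text modalities into a single throughput predictor can be illustrated with a minimal late-fusion sketch: per-modality feature vectors are concatenated and fed to one closed-form ridge regression. All data below are synthetic, and the feature names are hypothetical assumptions; the abstract does not specify its models at this level of detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-shift feature vectors from three modalities
n = 200
sensor = rng.normal(size=(n, 4))  # e.g. feed rate, mill power, pulp density, pH
vision = rng.normal(size=(n, 8))  # e.g. pooled embedding of conveyor-belt images
text = rng.normal(size=(n, 3))    # e.g. summary statistics from operator logs

# Synthetic throughput target, driven mostly by the sensor stream
y = (100.0 + sensor @ np.array([5.0, -3.0, 2.0, 1.0])
     + 0.5 * vision[:, 0] + rng.normal(scale=0.1, size=n))

# Late fusion: concatenate modality features, then fit ridge regression
# in closed form: w = (X^T X + lam I)^-1 X^T (y - mean(y))
X = np.hstack([sensor, vision, text])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ (y - y.mean()))

def predict_throughput(sensor_row, vision_row, text_row):
    """Predict plant throughput from one fused multimodal observation."""
    x = np.concatenate([sensor_row, vision_row, text_row])
    return float(y.mean() + x @ w)
```

In a deployed system the random features would be replaced by real sensor streams, learned image embeddings, and NLP-derived features, but the fusion-then-predict structure is the same.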
Procedia PDF Downloads 68
29 Low-Cost Aviation Solutions to Strengthen Counter-Poaching Efforts in Kenya
Authors: Kuldeep Rawat, Michael O'Shea, Maureen McGough
Abstract:
The paper will discuss a National Institute of Justice (NIJ) funded project to provide cost-effective aviation technologies and research to support counter-poaching operations related to endangered, protected, and/or regulated wildlife. The goal of this project is to provide cost-effective aviation technology and research support to the Kenya Wildlife Service (KWS) in its counter-poaching efforts. In pursuit of this goal, Elizabeth City State University (ECSU) is assisting the NIJ in enhancing the Kenya Wildlife Service's aviation technology and related capacity to meet its counter-poaching mission. Poaching is, at its core, systemic: poachers go to extreme lengths to kill high-value target species such as elephant and rhino. These species typically live in underdeveloped or impoverished nations, where poachers face fewer barriers to their operations. In Kenya, with fifty-nine (59) parks and reserves spread over an area of 225,830 square miles (584,897 square kilometers), adequate surveillance on the ground is next to impossible. Cost-effective aviation surveillance technologies, based on a comprehensive needs assessment and operational evaluation, are needed to curb poaching and effectively prevent wildlife trafficking. As one of the premier law enforcement Air Wings in East Africa, the KWS Air Wing plays a crucial role in Kenya, not only in counter-poaching and wildlife conservation efforts, but in aerial surveillance, counterterrorism, and national security efforts as well. While the Air Wing has done a remarkable job conducting aerial patrols with limited resources, additional aircraft and upgraded technology should significantly advance the Air Wing's ability to achieve its wildlife protection mission. 
The project includes: (i) a needs assessment of the KWS Air Wing, including the identification of resources, current and prospective capacity, operational challenges, and priority goals for expansion; (ii) acquisition of low-cost aviation technology to meet priority needs; and (iii) an operational evaluation of technology performance, with a focus on implementation and effectiveness. The Needs Assessment reflects the priorities identified through two site visits to the KWS Air Wing in Nairobi, Kenya, as well as field visits to multiple national parks receiving aerial support and interviews and surveys with KWS Air Wing pilots and leadership. The Needs Assessment identified several immediate technology needs, including: GPS upgrades, including a weather application; night-flying capabilities, including runway lights and night vision technology; cameras and surveillance equipment; a flight tracking system and/or an Emergency Position Indicating Radio Beacon; lightweight ballistic-resistant body armor; and medical equipment, including a customized stretcher and standard medical evacuation equipment. Results of this assessment, along with significant input from the KWS Air Wing, will guide the second phase of this project: technology acquisition. Acquired technology will then be evaluated in the field, with a focus on implementation and effectiveness. Results will ultimately be translated for any rural or tribal law enforcement agencies with comparable aerial surveillance missions, operational environments, and jurisdictional challenges seeking to implement low-cost aviation technology. Results from the Needs Assessment phase, including survey results and our ongoing technology acquisition and baseline operational evaluation, will be discussed in the paper. Keywords: aerial surveillance mission, aviation technology, counter-poaching, wildlife protection
Procedia PDF Downloads 276
28 Deepfake Detection System through Collective Intelligence in Public Blockchain Environment
Authors: Mustafa Zemin
Abstract:
The increasing popularity of deepfake technology poses a growing threat to information integrity and security. This paper presents a deepfake detection system designed to leverage public blockchain and collective intelligence as solutions to address this issue. Utilizing smart contracts on the Ethereum blockchain ensures secure, decentralized media content verification, creating an auditable and tamper-resistant framework. The approach integrates concepts from electronic voting, allowing a network of participants to assess content authenticity collectively through consensus mechanisms. This decentralized, community-driven model enhances detection accuracy while preventing single points of failure. Experimental analysis demonstrates the system’s robustness, reliability, and scalability in deepfake detection, offering a sustainable approach to combat digital misinformation. The proposed solution advances deepfake detection capabilities and provides a framework for applying blockchain-based collective intelligence to other domains facing similar verification challenges, thereby contributing to the fight against digital misinformation in a secure, trustless environment. The limitations and challenges identified in this work can be addressed by enhancing user participation, particularly through more informed and conscious engagement. One potential avenue is to involve users in developing deep learning models, which could contribute to the voting system. However, for such participation to be incentivized, a reward mechanism must be implemented. A viable approach to this is through a credibility-based reward system, where users who actively participate in voting are compensated with tokens. This system would serve not only as a motivational factor but also as a mechanism for ensuring higher-quality participation over time. Each participant is assigned a Credibility Score, which is dynamically adjusted based on the accuracy of their votes. 
The credibility score increases when their decisions align with the majority consensus and decreases when their votes are incorrect. This incentivizes accurate decision-making and ensures that more reliable participants gain influence in the system. The credibility scores are designed to increase progressively for users with more correct votes. In contrast, penalties for incorrect voting are more severe than the rewards for correct decisions, emphasizing the importance of voting accuracy. As users' Credibility Scores increase over time, successful voters will be less reliant on lower-scoring participants, thereby fostering an environment where high-quality contributions are valued. Furthermore, tokenization plays a critical role in enhancing the decentralization of the system. Users can participate without uploading videos by receiving tokens through an airdrop mechanism once they surpass a predefined credibility threshold. This process effectively decentralizes decision-making and incentivizes participation from a broader user base. The integration of tokenization would allow users to interact with the smart contract in a more seamless manner, replacing the use of test tokens with the system's own tokens. Voters with high credibility scores would be rewarded with tokens. The distribution model is designed to reflect the gradual increase in token value over time, similar to the evolution of Bitcoin's reward system, where early participants earn higher rewards, but as the system matures, the token value appreciates, and rewards decrease. Keywords: deepfake detection, public blockchain, electronic voting, collective intelligence, Ethereum
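The credibility mechanism described above (majority-aligned votes rewarded, incorrect votes penalized more severely, and a token airdrop once a credibility threshold is surpassed) can be sketched as follows. The specific reward, penalty, and threshold values are illustrative assumptions, not figures from the paper.

```python
def update_credibility(score, voted_with_majority,
                       reward=1.0, penalty=2.0, floor=0.0):
    """Adjust a participant's Credibility Score after one verification round.

    The penalty for voting against the eventual consensus is deliberately
    larger than the reward for agreeing with it, as the abstract describes.
    """
    if voted_with_majority:
        return score + reward
    return max(floor, score - penalty)

def airdrop_eligible(score, threshold=10.0):
    """Participants above the credibility threshold receive tokens via airdrop."""
    return score >= threshold

# Simulate one voter across rounds (True = vote matched the majority consensus)
history = [True] * 5 + [False] + [True] * 7
score = 0.0
for outcome in history:
    score = update_credibility(score, outcome)
```

Because a single wrong vote costs two correct votes' worth of credibility, a voter must be right far more often than not to accumulate influence, which is the behavior the scheme is designed to reward.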
Procedia PDF Downloads 7