Search results for: adaptive soc estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2880

30 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of the urban heat island (UHI) and of its interactions with climate change and urban climate is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and of the micro-climate in urban areas, by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France), combining traditionally used data such as topography with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements collected by bicycle. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-centre. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They belong to various categories such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from the Greater Lyon open data platform. Regarding the presence of low, medium, and high vegetation, buildings and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground, 50 m for low and medium vegetation and for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (after retaining only variables with pairwise Pearson |r| < 0.7 and VIF < 5), using a stepwise selection algorithm. Moreover, holdout validation (80% training, 20% testing) is performed because of its ability to detect over-fitting of the multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other variables are recurrent, such as distance to subway stations, distance to water areas, NDVI, digital elevation model, sky view factor, average vegetation density, or building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale in order to consider the local impact of trees, streets, and buildings. There is currently no sufficiently dense network of fixed weather stations deployed in central Lyon and in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
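
A minimal sketch of the workflow described above (collinearity filtering by |r| and VIF, forward stepwise selection, and 80/20 holdout validation) is given below; synthetic data and generic column names stand in for the 33 predictors and the bicycle-measured air temperatures, which are not available here.

```python
# Sketch of the regression workflow: collinearity filtering, stepwise selection, holdout validation.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 12)),
                 columns=[f"var_{i}" for i in range(12)])   # stand-ins for LST, NDVI, SVF, ...
y = 15 + 0.8 * X["var_0"] - 0.5 * X["var_3"] + rng.normal(0, 0.2, 300)

# 1) Drop one variable of every pair with |Pearson r| >= 0.7
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] >= 0.7).any()])

# 2) Iteratively drop variables with VIF >= 5
while True:
    vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                    index=X.columns)
    if vif.max() < 5:
        break
    X = X.drop(columns=[vif.idxmax()])

# 3) Forward stepwise selection and 80/20 holdout validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
sel = SequentialFeatureSelector(LinearRegression(), direction="forward").fit(X_tr, y_tr)
cols = X_tr.columns[sel.get_support()]
model = LinearRegression().fit(X_tr[cols], y_tr)
pred = model.predict(X_te[cols])
print("R2 =", round(r2_score(y_te, pred), 2),
      "RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 2), "°C")
```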

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 137
29 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
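
As an illustration of coupling PSO with an ANN, the sketch below uses a plain global-best PSO to optimize the weights of a tiny one-hidden-layer network on synthetic data; the road-reconstruction dataset is not available, and the study may equally have applied PSO to hyperparameters rather than weights, so this is only one possible reading.

```python
# Minimal sketch: PSO searching the weight vector of a small ANN (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                       # e.g. estimate, resources, progress, ...
y = X @ np.array([0.6, 0.2, 0.15, 0.05]) + 0.1 * rng.standard_normal(200)

H = 6                                          # hidden neurons
n_w = 4 * H + H + H + 1                        # W1, b1, W2, b2 flattened

def mse(w):
    W1 = w[:4 * H].reshape(4, H); b1 = w[4 * H:5 * H]
    W2 = w[5 * H:6 * H]; b2 = w[-1]
    hidden = np.tanh(X @ W1 + b1)
    return np.mean((hidden @ W2 + b2 - y) ** 2)

n_particles, iters = 30, 200
inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_w))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE:", round(pbest_val.min(), 4))
```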

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 60
28 Development and Implementation of an "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” involves achieving a balance between the self-generation capacity of each educational institution and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption and even for feeding back to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established on the experimental sites, enabling the estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainability construction standards, not only on the energy demand of public buildings, but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff who are responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time measurement data from any mobile phone or computer by simply browsing the dedicated website, offering powerful tools for policy makers for better decision making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology includes a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire the energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition, we aim to study the users’ awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and for validation of its sufficient capacity. The suggested outcomes of this research include: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that provides consumer-dedicated information to academic researchers, municipality officials, and educational staff and students; 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.

Keywords: sustainability, electric island, IoT, smart building

Procedia PDF Downloads 179
27 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to an estimation). Consequently, many Swiss courts only publish a fraction of their decisions. An automated anonymization system reduces these costs substantially, further leading to more capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design ESRA will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
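
The replacement step of the anonymization pipeline (recognized entities replaced by their category plus an identifier, so context is preserved) can be illustrated with a toy sketch; the sentence, names, and hard-coded entity spans below are fictitious stand-ins for the output of a NER model trained on Swiss legal text.

```python
# Toy illustration of the anonymization step described above.
from collections import defaultdict

decision = "Hans Muster, vertreten durch Dr. Anna Beispiel, gegen Muster AG."
entities = [  # (start, end, label) as a NER model might return them
    (0, 11, "PERSON"),
    (33, 46, "PERSON"),
    (54, 63, "ORGANISATION"),
]

def anonymize(text, spans):
    counters = defaultdict(int)
    seen = {}
    out, last = [], 0
    for start, end, label in sorted(spans):
        surface = text[start:end]
        if surface not in seen:                  # the same entity keeps the same tag
            counters[label] += 1
            seen[surface] = f"[{label}_{counters[label]}]"
        out.append(text[last:start])
        out.append(seen[surface])
        last = end
    out.append(text[last:])
    return "".join(out)

print(anonymize(decision, entities))
# -> "[PERSON_1], vertreten durch Dr. [PERSON_2], gegen [ORGANISATION_1]."
```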

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 148
26 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm hypothesis. To evaluate the effect of diaphragm flexibility, two case-study buildings were selected, a low-rise and a mid-rise building characteristic of TWRCB in Colombia. The buildings were analyzed in OpenSees using the MVLEM-3D element for the walls and shell elements to simulate the slabs, in order to include the effect of the coupling diaphragm in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using a set of 44 far-field ground motions from FEMA P-695, scaled by factors of 1.0 and 1.5 to consider the probability of collapse for the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the location and hazard zone of the archetypes in the Colombian NSR-10 code. Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of absent, rigid, and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is considered explicitly, depending on the plan configuration of the structural walls; additionally, the nonlinear behaviour when no slab is considered is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of considering the slab contribution to the distribution of shear forces between structural elements according to their design resistance and rigidity. The dynamic analysis revealed that including the slab reduces the collapse probability of this system, because displacements and deformations are lower, enhancing the seismic performance and the safety of residents. Including the slab in the model is therefore important to capture the real effect of coupling on the distribution of shear forces in the walls, to estimate the correct nonlinear behaviour of this system, and to proportion the resistance and rigidity of the elements adequately in design, so as to reduce the possibility of damage to the elements during an earthquake.
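
One way the collapse probabilities mentioned above can be summarized is as the fraction of the 44 records whose peak response exceeds a collapse limit at each scaling level; the sketch below illustrates only this bookkeeping step, with hypothetical drift values and an assumed drift limit rather than results of the study.

```python
# Sketch: collapse probability as the exceedance fraction over the record set.
import numpy as np

rng = np.random.default_rng(1)
drift_dbe = rng.lognormal(mean=np.log(0.004), sigma=0.4, size=44)  # scale factor 1.0 (hypothetical)
drift_mce = rng.lognormal(mean=np.log(0.007), sigma=0.4, size=44)  # scale factor 1.5 (hypothetical)
collapse_limit = 0.01                    # assumed drift capacity, for illustration only

p_dbe = np.mean(drift_dbe > collapse_limit)
p_mce = np.mean(drift_mce > collapse_limit)
print(f"P(collapse | DBE) = {p_dbe:.2f}, P(collapse | MCE) = {p_mce:.2f}")
```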

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 75
25 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystem

Authors: Rupak Kumar Sarma, Ratul Saikia

Abstract:

Macrophytes constitute a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles, as well as in maintaining biodiversity at the ecosystem level. The large, carbon-rich underground biomass of macrophyte populations may harbour a diverse microbial community with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and depth of the lake were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depth in winter (January 2016) and summer (July 2016) was recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on the variation in macrophyte community composition. Overall, the dominant macrophytic populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes and Euryale ferox. There was a distinct correlation between the variation of major sediment physicochemical parameters and the change in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process at the sites dominated by these aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed higher populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study, based on bacterial 16S rDNA V3-V4 metagenome sequencing, revealed overall similar bacterial phyla in all the sediment samples collected during the study. Thus, there might be an uneven distribution of nitrifying and denitrifying molecular markers among the sediment samples collected during the investigation. The diversity and abundance of the nitrifying and denitrifying molecular markers in the sediment samples are under investigation. Thus, the role of different aquatic plant functional types in microorganism-mediated coupling of the nitrogen cycle could be screened further, building on the present initial investigation.

Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification

Procedia PDF Downloads 174
24 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating member models are selected at each grid point and for each forecast step in the training period. A multi-model superensemble based on training under similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been experimented with in combination with the above-mentioned approaches. The comparison of these schemes with the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
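
The core superensemble idea (member forecasts regressed on observations in a training phase, with the resulting least-squares weights applied in the forecast phase) can be sketched in a few lines; the data below are synthetic and the anomaly-based weighting shown here is a simplified stand-in for the full scheme.

```python
# Minimal sketch of superensemble training and forecasting with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_models = 120, 5                 # e.g. ECMWF, NCEP, CMA, CMC, UKMO
truth = rng.gamma(2.0, 4.0, n_train)       # "observed" rainfall (mm/day)
members = truth[:, None] + rng.normal(0, [2, 3, 4, 3, 5], (n_train, n_models))

# Training phase: least-squares weights for member anomalies about their means
anomalies = members - members.mean(axis=0)
weights, *_ = np.linalg.lstsq(anomalies, truth - truth.mean(), rcond=None)

# Forecast phase: apply the trained weights to a new set of member forecasts
new_members = truth[-1] + rng.normal(0, [2, 3, 4, 3, 5])
superensemble = truth.mean() + (new_members - members.mean(axis=0)) @ weights
print("weights:", np.round(weights, 2), " forecast:", round(float(superensemble), 1), "mm/day")
```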

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
23 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbines’ design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine. Our approach improves upon the traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
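
To make the GA layer concrete, the sketch below evolves individuals that encode a radial chord and pitch distribution; the bem_cp() fitness evaluator is only a smooth placeholder standing in for the Blade Element Momentum calculation used in the project, and all numerical ranges are illustrative assumptions.

```python
# Minimal GA sketch over chord/pitch distributions with a placeholder fitness.
import numpy as np

rng = np.random.default_rng(2)
n_stations, pop_size, generations = 10, 40, 60

def bem_cp(chord, pitch):
    # Placeholder surrogate: rewards smooth, tapered chord and moderate twist.
    taper = -np.mean(np.diff(chord) ** 2)
    twist = -np.mean((pitch - np.linspace(20, 2, n_stations)) ** 2) / 100.0
    return 0.45 + taper + twist            # "Cp-like" score, not physical

def random_individual():
    return np.concatenate([rng.uniform(0.05, 0.20, n_stations),   # chord [m]
                           rng.uniform(0.0, 25.0, n_stations)])   # pitch [deg]

def fitness(ind):
    return bem_cp(ind[:n_stations], ind[n_stations:])

pop = np.array([random_individual() for _ in range(pop_size)])
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-pop_size // 2:]          # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, 2 * n_stations)                    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.01, child.shape)                # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best Cp-like score:", round(fitness(best), 3))
```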

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 231
22 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects the local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space locally affect urban air temperature, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying the local UHI for extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST; data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps. First, a GWR model is set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, are employed; two time periods are considered according to the satellite revisit schedule, i.e., 10:30 am and 9:30 pm. Afterwards, the results are downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and the multispectral information provided by the Landsat mission, in particular the albedo, together with the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters; albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation relying on indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly effective models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
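
The first GWR step (air temperature regressed on LST with spatially varying coefficients) can be illustrated with a hand-rolled locally weighted regression; the sketch below uses synthetic station data, a Gaussian kernel and a fixed bandwidth rather than an optimized one, so it is only an illustration of the idea, not the study's implementation.

```python
# Sketch of the low-resolution GWR step: locally weighted least squares per location.
import numpy as np

rng = np.random.default_rng(3)
n = 60
coords = rng.uniform(0, 50, (n, 2))                 # station coordinates [km]
lst = 20 + 8 * rng.random(n)                        # MODIS-like LST predictor
beta_true = 0.6 + 0.01 * coords[:, 0]               # spatially varying slope (synthetic)
t_air = 2.0 + beta_true * lst + rng.normal(0, 0.3, n)

def gwr_local_fit(x0, bandwidth=10.0):
    d = np.linalg.norm(coords - x0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones(n), lst])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ t_air)   # weighted least squares
    return beta                                      # [local intercept, local LST slope]

for pt in [(5.0, 5.0), (45.0, 45.0)]:
    b0, b1 = gwr_local_fit(np.array(pt))
    print(f"at {pt}: T_air ~ {b0:.2f} + {b1:.3f} * LST")
```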

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 195
21 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility, within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a smartphone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimate of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. These are used to measure the speed of neural signal transfer along the arm, so that, over time, an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved, even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only by encouraging participation in rehabilitation, but also by offering the potential of an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
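
The trial-to-trial learning idea behind ILC can be shown with a toy P-type update: the stimulation input applied on the next repetition of an exercise is the previous input plus a learning gain times the previous tracking error. The first-order "arm" model and the gain below are illustrative placeholders, not the Riener muscle model or the project's tuned controller.

```python
# Toy P-type iterative learning control on a first-order surrogate arm model.
import numpy as np

dt, T = 0.01, 2.0
t = np.arange(0, T, dt)
reference = 0.5 * (1 - np.cos(np.pi * t / T))        # desired reaching motion

def arm_response(u, a=3.0, b=2.0):
    """Toy first-order dynamics x' = -a*x + b*u, integrated with Euler steps."""
    x = np.zeros_like(u)
    for k in range(1, len(u)):
        x[k] = x[k - 1] + dt * (-a * x[k - 1] + b * u[k - 1])
    return x

u = np.zeros_like(t)                                  # trial-0 stimulation input
learning_gain = 25.0                                  # illustrative gain
for trial in range(15):
    y = arm_response(u)
    error = reference - y
    u[:-1] = u[:-1] + learning_gain * error[1:]       # shifted P-type ILC update
    print(f"trial {trial:2d}: RMS tracking error = {np.sqrt(np.mean(error ** 2)):.4f}")
```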

Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation

Procedia PDF Downloads 264
20 Sensitivity and Specificity of Some Serological Tests Used for Diagnosis of Bovine Brucellosis in Egypt on Bacteriological and Molecular Basis

Authors: Hosein I. Hosein, Ragab Azzam, Ahmed M. S. Menshawy, Sherin Rouby, Khaled Hendy, Ayman Mahrous, Hany Hussien

Abstract:

Brucellosis is a highly contagious bacterial zoonotic disease of worldwide spread that goes by different names: infectious or enzootic abortion and Bang's disease in animals, and Mediterranean or Malta fever, undulant fever and rock fever in humans. It is caused by the different species of the genus Brucella, a Gram-negative, aerobic, non-spore-forming, facultative intracellular bacterium. Brucella affects a wide range of mammals including bovines, small ruminants, pigs, equines, rodents and marine mammals, as well as humans, resulting in serious economic losses in animal populations. In humans, Brucella causes a severe illness representing a great public health problem. The disease was reported in Egypt for the first time in 1939; since then, the disease has remained endemic at high levels among cattle, buffalo, sheep and goats and still represents a public health hazard. The annual economic losses due to brucellosis were estimated at about 60 million Egyptian pounds, but actual estimates are still missing despite almost 30 years of implementation of the Egyptian control programme. Despite being the gold standard, bacterial isolation has been reported to show poor sensitivity for samples with low levels of Brucella and is impractical for regular screening of large populations. Thus, serological tests still remain the cornerstone for routine diagnosis of brucellosis, especially in developing countries. In the present study, a total of 1533 cows (256 from Beni-Suef Governorate, 445 from Al-Fayoum Governorate and 832 from Damietta Governorate) were employed for the estimation of the relative sensitivity, relative specificity, positive predictive value and negative predictive value of the buffered acidified plate antigen test (BPAT), the rose bengal test (RBT) and the complement fixation test (CFT). The overall seroprevalence of brucellosis was 19.63%. The relative sensitivity, relative specificity, positive predictive value and negative predictive value of BPAT, RBT and CFT were estimated as (96.27%, 96.76%, 87.65% and 99.10%), (93.42%, 96.27%, 90.16% and 98.35%) and (89.30%, 98.60%, 94.35% and 97.24%), respectively. BPAT showed the highest sensitivity among the three employed serological tests. RBT was less specific than BPAT. CFT showed the lowest sensitivity (89.30%) among the three employed serological tests but the highest specificity. Different tissue specimens of 22 seropositive cows (spleen, udder, and retropharyngeal and supra-mammary lymph nodes) were subjected to bacteriological studies for the isolation and identification of Brucella organisms. Brucella melitensis biovar 3 could be recovered from 12 (54.55%) cows. Bacteriological examination failed to classify the remaining 10 cases (45.45%), which were culture negative. Bruce-ladder PCR was carried out for molecular identification of the 12 Brucella isolates at the species level. Three fragments of 587 bp, 1071 bp and 1682 bp were amplified, indicating Brucella melitensis. The results indicate the importance of using several procedures to overcome the problem of some infected animals escaping diagnosis. Bruce-ladder PCR is an important tool for diagnosis and epidemiologic studies, providing relevant information for the identification of Brucella spp.
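
The four diagnostic indices reported above come directly from a 2x2 table of test result versus reference status; the sketch below shows the computation with hypothetical counts (chosen only to total 1533 animals and roughly match the reported prevalence), since the study reports percentages rather than raw counts.

```python
# Sketch: sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
def diagnostic_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # relative sensitivity
    specificity = tn / (tn + fp)          # relative specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical BPAT-style counts, for illustration only (290+40+11+1192 = 1533)
se, sp, ppv, npv = diagnostic_indices(tp=290, fp=40, fn=11, tn=1192)
print(f"Se={se:.2%}  Sp={sp:.2%}  PPV={ppv:.2%}  NPV={npv:.2%}")
```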

Keywords: brucellosis, relative sensitivity, relative specificity, Bruce-ladder, Egypt

Procedia PDF Downloads 355
19 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people occupationally exposed to ionizing radiation was performed using methods of biological (frequency of chromosomal aberrations in lymphocytes) and physical (radionuclide analysis in urine, whole-body counter (WBC), individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determination of radionuclides in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the frequency of chromosomal aberrations in the staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different characteristics (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min were used. Herewith, assuming an individual variation of the chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the differing reactions of individuals to irradiation, i.e., radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined cohort was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). According to the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3, for the group with medium radiosensitivity 3.3, and for the radiosensitive group 9. These data indicate the highest variation of the characteristic (reaction to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the doses defined from the results of the cytogenetic analysis and the external radiation doses obtained with thermoluminescent dosimeters. Mathematical models based on the type of deviation of the radiation dose according to the professionals' radiosensitivity level were proposed.
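
Biological dose reconstruction from the dicentric and centric ring yield is typically done by inverting a linear-quadratic calibration curve Y = c + alpha*D + beta*D^2; the sketch below shows that inversion with assumed, illustrative coefficients rather than the laboratory's own calibration values.

```python
# Sketch: dose estimate from an aberration yield via a linear-quadratic calibration curve.
import numpy as np

c, alpha, beta = 0.001, 0.03, 0.06        # aberrations/cell, /Gy, /Gy^2 (assumed values)

def dose_from_yield(y):
    """Positive root of beta*D^2 + alpha*D + (c - y) = 0."""
    disc = alpha ** 2 - 4 * beta * (c - y)
    return (-alpha + np.sqrt(disc)) / (2 * beta)

observed_yield = 0.0027                   # e.g. 0.27% dicentrics + centric rings per cell
print(f"estimated dose: {dose_from_yield(observed_yield):.3f} Gy")
```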

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
18 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data it is clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can also influence the subject vehicle's choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and such distributions need to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on the time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay, which further helps to characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of the simulation results and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures like density, speed, flow, capacity, area occupancy, etc. under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Units), capacity and level of service of the system are also discussed.
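
The statistical modelling and simulation steps above amount to fitting a candidate distribution to the observed time gaps of a given vehicle-class pair, checking the fit, and then sampling from the calibrated distribution when generating vehicles; the sketch below uses synthetic gaps and a lognormal model purely as an example candidate, not as the distribution family adopted by the study.

```python
# Sketch: fit a time-gap distribution, test the fit, and sample gaps for simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
observed_gaps = rng.lognormal(mean=0.2, sigma=0.6, size=400)    # seconds (synthetic)

# Fit a lognormal time-gap model (location fixed at zero) and check it
shape, loc, scale = stats.lognorm.fit(observed_gaps, floc=0)
ks_stat, p_value = stats.kstest(observed_gaps, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# Use the calibrated distribution to generate arrival gaps in simulation
simulated_gaps = stats.lognorm.rvs(shape, loc, scale, size=1000, random_state=5)
print("mean simulated gap [s]:", round(simulated_gaps.mean(), 2))
```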

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 342
17 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data

Authors: Linna Li

Abstract:

The capability to move around places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commuting. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time that covers 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z axis can be selected for displaying activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers including land use and road networks in 3D visualization, the vivid representation of a realistic view of the urban environment boosts situational awareness of the map reader. A comparison of the same individual’s paths on different days shows some regular patterns on weekdays for some Twitter users, but for some other users, their daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using the big data approach and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of data collection on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
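
One hourly slice of the activity surfaces described above can be produced with a standard kernel density estimate over the geotagged points; the coordinates below are synthetic stand-ins for the collected tweets, and the grid resolution is an arbitrary choice for illustration.

```python
# Sketch: kernel density estimation of tweet locations for one hour of the day.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# synthetic lon/lat clusters standing in for one hour of geotagged tweets
lon = np.concatenate([rng.normal(-118.25, 0.05, 300), rng.normal(-118.49, 0.03, 150)])
lat = np.concatenate([rng.normal(34.05, 0.05, 300), rng.normal(34.02, 0.03, 150)])

kde = gaussian_kde(np.vstack([lon, lat]))

# Evaluate the density on a regular grid (one time slice of the 3D space-time cube)
gx, gy = np.meshgrid(np.linspace(lon.min(), lon.max(), 100),
                     np.linspace(lat.min(), lat.max(), 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("hot-spot density peak:", round(float(density.max()), 2))
```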

Keywords: geovisualization, human mobility pattern, Los Angeles, social media

Procedia PDF Downloads 119
16 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. A further problem is the unknown processing forces, like cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a single-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuits, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The reduced model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between the machine table and the spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components in motor catalogues is limited by derived analytical formulas, which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values are a good measure of highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
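
The structure of the black-box layer can be sketched as follows: each candidate is a set of catalogue indices, the objective is the integrated position deviation after a reference step, and an overshoot above 1% is penalized. The run_drive_simulation() surrogate, the catalogue sizes and the use of SciPy's differential evolution (one possible evolutionary optimizer, with rounding to handle the discrete indices) are illustrative assumptions; in the project the objective would call the Modelica Feed Drive Library simulation.

```python
# Sketch of the black-box optimization over discrete component choices.
import numpy as np
from scipy.optimize import differential_evolution

catalogue_sizes = [8, 5, 6]              # motors, gear boxes, ball screws (assumed)

def run_drive_simulation(idx):
    """Placeholder: return (integrated deviation, overshoot) for one component set."""
    motor, gear, screw = idx
    deviation = 0.5 + 0.05 * motor - 0.03 * gear + 0.02 * screw   # toy surrogate
    overshoot = 0.002 + 0.001 * screw
    return abs(deviation), overshoot

def objective(x):
    idx = np.clip(np.round(x), 0, np.array(catalogue_sizes) - 1).astype(int)
    deviation, overshoot = run_drive_simulation(idx)
    penalty = 1e3 * max(0.0, overshoot - 0.01)        # enforce overshoot < 1%
    return deviation + penalty

bounds = [(0, s - 1) for s in catalogue_sizes]
result = differential_evolution(objective, bounds, seed=7, maxiter=50, polish=False)
best = np.round(result.x).astype(int)
print("selected catalogue indices:", best, " objective:", round(result.fun, 3))
```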

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 287
15 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the 'xkl' legacy software. This integration incorporates reassigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, the proposed model, integrating combined CNNs and RNNs, demonstrates high precision and robustness in landmark detection. The augmentation of reassigned spectrogram fusion within the 'xkl' software constitutes a substantial advancement, particularly enhancing the precision of vowel formant estimation. This augmentation yields high accuracy in landmark detection, resulting in a substantial performance improvement compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the domain of distinctive feature-based speech recognition systems. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This allows the model to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, the advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only marks a significant advancement in Italian speech vowel landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including high accuracy, adaptability, and sophistication, marking a milestone at the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of these techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
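
A minimal sketch of a combined CNN + RNN frame-level landmark detector of the kind described above is given below: 1-D convolutions over spectrogram frames feed a bidirectional recurrent layer, and each frame is classified as landmark or non-landmark. The input sizes and random training batch are placeholders, and the attention and Bayesian temporal encoding components of the proposed model are omitted for brevity.

```python
# Sketch: frame-level landmark detector with Conv1D + bidirectional LSTM (tf.keras).
import numpy as np
import tensorflow as tf

n_frames, n_bins = 200, 128              # frames per utterance, spectrogram bins (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu",
                           input_shape=(n_frames, n_bins)),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy reassigned-spectrogram batch and frame-level landmark labels
x = np.random.rand(8, n_frames, n_bins).astype("float32")
y = (np.random.rand(8, n_frames, 1) > 0.9).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).shape)    # (1, 200, 1) per-frame landmark probabilities
```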

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63
14 Aquaporin-1 as a Differential Marker in Toxicant-Induced Lung Injury

Authors: Ekta Yadav, Sukanta Bhattacharya, Brijesh Yadav, Ariel Hus, Jagjit Yadav

Abstract:

Background and Significance: Respiratory exposure to toxicants (chemicals or particulates) causes disruption of lung homeostasis leading to lung toxicity/injury manifested as pulmonary inflammation, edema, and/or other effects depending on the type and extent of exposure. This emphasizes the need for investigating toxicant type-specific mechanisms to understand therapeutic targets. Aquaporins, also known as water channels, are known to play a role in lung homeostasis. In particular, the two major lung aquaporins, AQP5 and AQP1, expressed in the alveolar epithelium and vascular endothelium respectively, allow for the movement of fluid between the alveolar air space and the associated vasculature. In view of this, the current study is focused on understanding the regulation of lung aquaporins and other targets during inhalation exposure to toxic chemicals (cigarette smoke chemicals) versus toxic particles (carbon nanoparticles), or co-exposures, to understand their relevance as markers of injury and intervention. Methodologies: C57BL/6 mice (5-7 weeks old) were used in this study following a protocol approved by the University of Cincinnati Institutional Animal Care and Use Committee (IACUC). The mice were exposed via oropharyngeal aspiration to a multiwall carbon nanotube (MWCNT) particle suspension once (33 µg/mouse) followed by housing for four weeks, or to cigarette smoke extract (CSE) using a daily dose of 30 µl/mouse for four weeks, or to co-exposure using the combined regime. Control groups received vehicles following the same dosing schedule. Lung toxicity/injury was assessed in terms of homeostasis changes in the lung tissue and lumen. Exposed lungs were analyzed for transcriptional expression of specific targets (AQPs, surfactant protein A, Mucin 5b) in relation to tissue homeostasis. Total RNA from lungs, extracted using a TRI Reagent kit, was analyzed using qRT-PCR based on gene-specific primers. Total protein in bronchoalveolar lavage (BAL) fluid was determined by the DC protein estimation kit (Bio-Rad). GraphPad Prism 5.0 (La Jolla, CA, USA) was used for all analyses. Major findings: CNT exposure alone or as co-exposure with CSE increased the total protein content in the BAL fluid (lung lumen rinse), implying compromised membrane integrity and cellular infiltration in the lung alveoli. In contrast, CSE showed no significant effect. AQP1, required for water transport across membranes of endothelial cells in lungs, was significantly upregulated by CNT exposure but downregulated by CSE exposure and showed an intermediate level of expression for the co-exposure group. Both CNT and CSE exposures had significant downregulating effects on Muc5b and SP-A expression, and the co-exposure showed either no significant effect (Muc5b) or a significant downregulating effect (SP-A), suggesting an increased propensity for infection in the exposed lungs. Conclusions: The current study based on the lung toxicity mouse model showed that both toxicant types, particles (CNT) versus chemicals (CSE), cause similar downregulation of lung innate defense targets (SP-A, Muc5b) and mostly a summative effect when presented as co-exposure. However, the two toxicant types show differential induction of aquaporin-1 coinciding with the corresponding differential damage to alveolar integrity (vascular permeability). Interestingly, this implies the potential of AQP1 as a differential marker of toxicant type-specific lung injury.

Keywords: aquaporin, gene expression, lung injury, toxicant exposure

Procedia PDF Downloads 184
13 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes

Authors: Nahashon Mwirigi

Abstract:

The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial immune function indicator, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to identify models best representing disease progression variability across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through CD4-defined stages, with a delayed onset of advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing distinct risk patterns in each group. The model’s ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model’s value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care. 
The 3-Parameter Weibull model’s ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
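
For readers wishing to reproduce this kind of model comparison, the sketch below fits Exponential, 2-parameter and 3-parameter Weibull distributions to hypothetical stage-sojourn times and ranks them by AIC and BIC; the data and parameter values are synthetic placeholders, not the Kenyatta National Hospital data.

```python
# Illustrative sketch (not the authors' code): fitting Exponential, 2-parameter and
# 3-parameter Weibull models to stage-sojourn times and comparing them by AIC/BIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical sojourn times (months) in one CD4-defined state, with a delayed onset (loc > 0)
t = stats.weibull_min.rvs(c=1.6, loc=2.0, scale=18.0, size=300, random_state=rng)
n = len(t)

def ic(logpdf_vals, k):
    """Return (AIC, BIC) from pointwise log-densities and the number of free parameters k."""
    ll = logpdf_vals.sum()
    return -2 * ll + 2 * k, -2 * ll + k * np.log(n)

models = {}
lam = stats.expon.fit(t, floc=0)                       # Exponential (rate only)
models["Exponential"] = ic(stats.expon.logpdf(t, *lam), 1)
w2 = stats.weibull_min.fit(t, floc=0)                  # 2-parameter Weibull (shape, scale)
models["Weibull-2P"] = ic(stats.weibull_min.logpdf(t, *w2), 2)
w3 = stats.weibull_min.fit(t)                          # 3-parameter Weibull (adds location/delay)
models["Weibull-3P"] = ic(stats.weibull_min.logpdf(t, *w3), 3)

for name, (aic, bic) in models.items():
    print(f"{name:12s}  AIC={aic:8.1f}  BIC={bic:8.1f}")
```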

Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling

Procedia PDF Downloads 10
12 Influence of Thermal Annealing on Phase Composition and Structure of Quartz-Sericite Mineral

Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.

Abstract:

Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or reduce deformation of ceramic goods during drying and thermal annealing. Because of their low melting temperature, they are also used to lower the annealing temperature during the fabrication of ceramic goods [1,2]. So-called "porcelain or China stones", quartz-sericite (muscovite) minerals, can also be used to prevent deformation, as the potassium oxide content of muscovite is rather high (SiO2 + KAl2[AlSi3O10](OH)2) [3]. To estimate the possibility of using this mineral in ceramic manufacture, the present article investigates the influence of thermal processing on the phase and chemical composition of this raw material. As for other ceramic raw materials (kaolin, white-burning clays), the basic industry requirements for the quality of a "porcelain stone" are as follows: small particle size, relatively high uniformity of distribution of components and phases, white color after firing, and a low content of colorant oxides or chromophores (Fe2O3, FeO, TiO2, etc.) [4,5]. In the present work, natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for investigation by scanning electron microscopy. Powder with a particle size of up to 63 μm was used for X-ray diffractometry and chemical analysis. Annealing of the samples was performed at 900, 1120, and 1350°C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1. For comparison, the compositions of raw materials from Russia and the USA are also presented. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a JEOL JXA-8800R electron probe microanalyzer. Figure 1 presents scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si and K atoms in the sample. As can be seen, the fine-grained, white, dense mineral includes quartz, sericite, and a small content of impurity minerals. Quartz crystals generally range in size from 80 to 500 μm. Between the quartz crystals, sericite inclusions with a tabular form and radiating structure are located. The size of the sericite crystals is ~40-250 μm. Using interplanar distance data [6,7] and ASTM powder X-ray diffraction data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz (SiO2), sericite of muscovite type (KAl2[AlSi3O10](OH)2) and kaolinite (Al2O3·2SiO2·2H2O) (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900°C the quartz-sericite contains quartz (SiO2) and muscovite (KAl2[AlSi3O10](OH)2); the peaks related to kaolinite are absent. After annealing at 1120°C, full disintegration of muscovite and formation of the mullite phase (3Al2O3·2SiO2) is observed (weak mullite peaks appear in Figure 3b and Table 3b). After annealing at 1350°C, the samples contain the crystalline phases of quartz and mullite (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the experimental data obtained on the formation of various phases during thermal annealing can be used for the development of fabrication technology for advanced materials.
Conclusion: The influence of thermal annealing in the interval 900-1350°C on the phase composition and structure of the quartz-sericite mineral is investigated. It is shown that during annealing the phase composition of the raw material changes. After annealing at 1350°C, the samples contain the crystalline phases of quartz and mullite, which gives the ceramics high density and abrasive and chemical stability.

Keywords: quartz-sericite, kaolinite, mullite, thermal processing

Procedia PDF Downloads 415
11 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, especially when the effects of the participating medium and wavelength properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed in the case of the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) as compared to the other cases. There is great scope for machine learning models to help in further reduction of computation cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the concerned environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
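
The variance advantage of low-discrepancy sequences can be illustrated with a toy integral standing in for wavelength sampling of photon bundles; the sketch below compares a pseudo-random estimator with a scrambled Sobol (QMC) estimator and is not the authors' radiative transfer solver. An importance-sampled version would additionally weight draws by a spectral importance density.

```python
# Minimal sketch (assumed, not the paper's solver): pseudo-random Monte Carlo versus a Sobol
# low-discrepancy (quasi-Monte Carlo) estimator for a smooth 1-D integral on [0, 1].
import numpy as np
from scipy.stats import qmc

f = lambda u: np.exp(-3.0 * u) * np.sin(6.0 * u)     # toy spectral weighting function
exact = 0.12789                                      # analytic value of the integral on [0, 1]

n = 2 ** 12
u_pr = np.random.default_rng(1).random(n)                         # pseudo-random samples
u_qmc = qmc.Sobol(d=1, scramble=True, seed=1).random(n).ravel()   # scrambled Sobol sequence

print("pseudo-random estimate:", f(u_pr).mean())
print("Sobol (QMC) estimate:  ", f(u_qmc).mean())
print("reference value:       ", exact)
# The QMC estimate typically lies much closer to the reference for the same sample count.
```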

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 224
10 CLOUD Japan: Prospective Multi-Hospital Study to Determine the Population-Based Incidence of Hospitalized Clostridium difficile Infections

Authors: Kazuhiro Tateda, Elisa Gonzalez, Shuhei Ito, Kirstin Heinrich, Kevin Sweetland, Pingping Zhang, Catia Ferreira, Michael Pride, Jennifer Moisi, Sharon Gray, Bennett Lee, Fred Angulo

Abstract:

Clostridium difficile (C. difficile) is the most common cause of antibiotic-associated diarrhea and infectious diarrhea in healthcare settings. Japan has an aging population; the elderly are at increased risk of hospitalization, antibiotic use, and C. difficile infection (CDI). Little is known about the population-based incidence and disease burden of CDI in Japan, although limited hospital-based studies have reported a lower incidence than in the United States. To understand CDI disease burden in Japan, CLOUD (Clostridium difficile Infection Burden of Disease in Adults in Japan) was developed. CLOUD will derive population-based incidence estimates of the number of CDI cases per 100,000 population per year in Ota-ku (population 723,341), one of the districts in Tokyo, Japan. CLOUD will include approximately 14 of the 28 Ota-ku hospitals, including Toho University Hospital, which is a 1,000 bed tertiary care teaching hospital. During the 12-month patient enrollment period, which is scheduled to begin in November 2018, Ota-ku residents > 50 years of age who are hospitalized at a participating hospital with diarrhea ( > 3 unformed stools (Bristol Stool Chart 5-7) in 24 hours) will be actively ascertained, consented, and enrolled by study surveillance staff. A stool specimen will be collected from enrolled patients and tested at a local reference laboratory (LSI Medience, Tokyo) using QUIK CHEK COMPLETE® (Abbott Laboratories), which simultaneously tests specimens for the presence of glutamate dehydrogenase (GDH) and C. difficile toxins A and B. A frozen stool specimen will also be sent to the Pfizer Laboratory (Pearl River, United States) for analysis using a two-step diagnostic testing algorithm that is based on detection of C. difficile strains/spores harboring the toxin B gene by PCR, followed by detection of free toxins (A and B) using a proprietary cell cytotoxicity neutralization assay (CCNA) developed by Pfizer. Positive specimens will be anaerobically cultured, and C. difficile isolates will be characterized by ribotyping and whole genomic sequencing. CDI patients enrolled in CLOUD will be contacted weekly for 90 days following diarrhea onset to describe clinical outcomes including recurrence, reinfection, and mortality, and patient-reported economic, clinical and humanistic outcomes (e.g., health-related quality of life, worsening of comorbidities, and patient and caregiver work absenteeism). Studies will also be undertaken to fully characterize the catchment area to enable population-based estimates. The 12-month active ascertainment of CDI cases among hospitalized Ota-ku residents with diarrhea in CLOUD, and the characterization of the Ota-ku catchment area, including estimation of the proportion of all hospitalizations of Ota-ku residents that occur in the CLOUD-participating hospitals, will yield CDI population-based incidence estimates, which can be stratified by age groups, risk groups, and source (hospital-acquired or community-acquired). These incidence estimates will be extrapolated, following age standardization using national census data, to yield CDI disease burden estimates for Japan. CLOUD also serves as a model for studies in other countries that can use the CLOUD protocol to estimate CDI disease burden.
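
The extrapolation step can be illustrated with a toy direct age-standardization calculation; all counts, populations, and weights below are hypothetical placeholders, not study results.

```python
# Illustrative calculation (hypothetical numbers): converting age-specific case counts and
# catchment populations into an age-standardized incidence per 100,000, using direct
# standardization against a national (census-derived) age structure.
age_groups   = ["50-64", "65-74", "75+"]
cases        = [12, 25, 40]            # ascertained cases, adjusted for hospital coverage
catchment    = [120000, 70000, 50000]  # Ota-ku residents in each age group
national_wts = [0.55, 0.25, 0.20]      # national population shares of the >50 age groups

crude = [1e5 * c / p for c, p in zip(cases, catchment)]          # age-specific rates per 100,000
standardized = sum(w * r for w, r in zip(national_wts, crude))   # direct standardization
print({g: round(r, 1) for g, r in zip(age_groups, crude)})
print("age-standardized incidence per 100,000:", round(standardized, 1))
```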

Keywords: Clostridium difficile, disease burden, epidemiology, study protocol

Procedia PDF Downloads 261
9 Determination of Aquifer Geometry Using Geophysical Methods: A Case Study from Sidi Bouzid Basin, Central Tunisia

Authors: Dhekra Khazri, Hakim Gabtni

Abstract:

Because of the overexploitation of the Sidi Bouzid water table, this study aims at integrating geophysical methods to determine the geometry of the aquifers, assessing their geological situation and geophysical characteristics. However, in highly tectonized zones controlled by Atlassic structural features with major NE-SW directions (central Tunisia), the Bouguer gravimetric responses of some areas can be so dominated by the regional structural trend that they remain unidentified or are defectively interpreted, as in the case of the Sidi Bouzid basin. This issue required the elaboration of a residual gravity anomaly isolating the Sidi Bouzid basin gravity response, ranging between -8 and -14 mGal and crucial for characterizing its aquifer geometry. Several gravity techniques helped construct the Sidi Bouzid basin's residual gravity anomaly, such as upward continuation compared to polynomial regression trends, and power spectrum analysis detecting deep basement sources (3 km), intermediate sources (2 km) and shallow sources (1 km). A 3D Euler deconvolution was also performed, detecting the deepest faults trending NE-SW, N-S and E-W with depth values reaching 5500 m and delineating the main outcropping structures of the study area. Further gravity treatments highlighted the subsurface geometry and structural features of the Sidi Bouzid basin through horizontal and vertical gradients, and also filters based on them, such as the tilt angle and source edge detector, which locate rooted edges or peaks from potential field data and detected a new E-W lineament compartmentalizing the Sidi Bouzid gutter into two unequal residual anomaly and subsidence domains. This subsurface morphology is also detected by the available 2D seismic reflection sections, defining the Sidi Bouzid basin as a deep gutter within a tectonic set of negative flower structures and collapsed and tilted blocks. Furthermore, these structural features were confirmed by a forward gravity modeling process over several modeled residual gravity profiles crossing the main area. The Sidi Bouzid basin (central Tunisia) is also of great interest because of the unknown total thickness and undefined substratum of its siliciclastic Tertiary package, and the unbounded structural subsurface features and deep faults of its aquifers. The combination of geological, hydrogeological and geophysical methods is therefore an ultimate need. The geophysical integration presented here, based on a gravity survey supporting the available seismic data through forward gravity modeling, enhanced the definition of the lateral and vertical extent of the basin's complex sedimentary fill via 3D gravity models, improved depth estimation through a depth-to-basement modeling approach, and provided 3D isochron seismic mapping visualization of the basin's Tertiary complex, refining its geostructural schema. A subsurface basin geomorphology mapping, based on a final matching between the basin's residual gravity map and the calculated theoretical signature map, was also displayed over the modeled residual gravity profiles. A complete multidisciplinary geophysical study of the Sidi Bouzid basin aquifers could be accomplished via an aeromagnetic survey and 4D microgravity reservoir monitoring, offering temporal tracking of the target aquifer's subsurface fluid dynamics and enhancing and rationalizing future groundwater exploitation in this arid area of central Tunisia.
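
As a hedged illustration of the power spectrum depth estimation mentioned above, the sketch below applies a Spector-Grant-style slope fit to the radially averaged spectrum of a synthetic residual anomaly grid; the grid, spacing, and wavenumber band are assumptions for demonstration only, not the Sidi Bouzid data.

```python
# Rough sketch: estimating a mean source depth from the radially averaged power spectrum of a
# gridded gravity anomaly. Synthetic data are built so that the spectrum mimics ~3 km deep sources.
import numpy as np

def radial_power_spectrum(grid, dx, nbins=40):
    """Radially averaged power spectrum; wavenumber k is returned in cycles per grid unit."""
    ny, nx = grid.shape
    power = np.abs(np.fft.fft2(grid - grid.mean())) ** 2
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    kr = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2).ravel()
    p = power.ravel()
    bins = np.linspace(kr[kr > 0].min(), kr.max(), nbins)
    idx = np.digitize(kr, bins)
    keep = [i for i in range(1, nbins) if np.any(idx == i)]
    k_mean = np.array([kr[idx == i].mean() for i in keep])
    logp = np.log(np.array([p[idx == i].mean() for i in keep]))
    return k_mean, logp

def depth_from_slope(k, logp, kmin, kmax):
    """Mean source depth ~ -slope / (4*pi) when k is expressed in cycles per unit length."""
    sel = (k >= kmin) & (k <= kmax)
    slope = np.polyfit(k[sel], logp[sel], 1)[0]
    return -slope / (4.0 * np.pi)

# Synthetic residual anomaly: white noise filtered by exp(-2*pi*k*h), equivalent to sources at h = 3 km
ny = nx = 128
dx = 1.0  # grid spacing in km
kx, ky = np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx)
kr2d = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
noise = np.random.default_rng(0).normal(size=(ny, nx))
grid = np.fft.ifft2(np.fft.fft2(noise) * np.exp(-2 * np.pi * kr2d * 3.0)).real

k, logp = radial_power_spectrum(grid, dx)
print("estimated mean source depth (km):", round(depth_from_slope(k, logp, 0.02, 0.15), 1))
```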

Keywords: aquifer geometry, geophysics, 3D gravity modeling, improved depths, source edge detector

Procedia PDF Downloads 284
8 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters.
We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
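
A minimal finite-dimensional sketch of the stretch-move update at the heart of AIES (the component FES applies to the low-wavenumber KL coefficients) is given below; the toy log-posterior, ensemble size, and tuning parameter a are illustrative assumptions, not the paper's test problems.

```python
# Sketch of the Goodman-Weare affine invariant ensemble sampler (stretch moves) on a toy
# anisotropic Gaussian target, standing in for the low-wavenumber coefficients FES samples.
import numpy as np

def log_post(x):
    """Toy anisotropic Gaussian log-posterior: std 5 in the first direction, std 1 in the second."""
    return -0.5 * (x[0] ** 2 / 25.0 + x[1] ** 2)

def aies_sweep(walkers, logp, rng, a=2.0):
    """One sequential sweep of stretch moves over the ensemble (updates walkers in place)."""
    n, d = walkers.shape
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])      # complementary walker
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a         # stretch factor, density g(z) ~ 1/sqrt(z)
        prop = walkers[j] + z * (walkers[i] - walkers[j])     # interpolate / extrapolate between walkers
        if np.log(rng.random()) < (d - 1) * np.log(z) + logp(prop) - logp(walkers[i]):
            walkers[i] = prop
    return walkers

rng = np.random.default_rng(1)
walkers = rng.normal(size=(10, 2))       # ensemble of 10 interacting walkers
samples = []
for it in range(3000):
    walkers = aies_sweep(walkers, log_post, rng)
    if it >= 500:                        # discard burn-in
        samples.append(walkers.copy())
samples = np.concatenate(samples)
print("posterior std estimates:", samples.std(axis=0).round(2))   # roughly [5.0, 1.0]
```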

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 154
7 A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)

Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe

Abstract:

Fragmentation and changes in the structural composition of tropical forests, as a result of intensifying anthropogenic disturbance, are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impact of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) was assessed in the South Caribbean of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding individual home ranges for each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days. All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis ongoing for data-loggers and primary forest sloths). Data-logger analyses included determination of activity budgets, circadian rhythms of activity and energy expenditure (using the vector of the dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size significantly increased with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 hectares in urban forests. Sloths in urban forests also used nearly double the number of trees (median = 22.5) compared with sloths in secondary forests (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and rates of energy acquisition. Energetic and behavioral analyses from the data-loggers will be considered in the context of fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints.
The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low dispersal species such as sloths.
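
A minimal sketch of a VeDBA-style energy proxy computed from tri-axial accelerometer data is shown below, assuming a running-mean separation of the static (gravitational) component; the sampling rate, window length, and synthetic signal are placeholders rather than values from the study.

```python
# Assumed formulation: VeDBA derived from tri-axial accelerometer data as an energy-expenditure
# proxy, with the static (posture/gravity) component estimated by a ~2 s running mean.
import numpy as np
import pandas as pd

def vedba(ax, ay, az, fs=25, window_s=2.0):
    """ax, ay, az: raw acceleration (g); fs: sampling rate (Hz). Returns per-sample VeDBA."""
    acc = pd.DataFrame({"x": ax, "y": ay, "z": az})
    static = acc.rolling(int(fs * window_s), center=True, min_periods=1).mean()
    dynamic = acc - static                          # remove the gravitational/postural component
    return np.sqrt((dynamic ** 2).sum(axis=1))      # vectorial dynamic body acceleration

# Example with 10 s of synthetic 25 Hz data
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 25)
ax = 0.05 * rng.normal(size=t.size)
ay = 0.05 * rng.normal(size=t.size) + 0.3 * np.sin(2 * np.pi * 1.5 * t)  # simulated limb movement
az = 1.0 + 0.05 * rng.normal(size=t.size)                                # gravity on the z-axis
print("mean VeDBA (g):", round(float(vedba(ax, ay, az).mean()), 3))
```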

Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths

Procedia PDF Downloads 114
6 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology

Authors: Simrin Kafle, Ulrika Enemark

Abstract:

Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal's National Health Insurance Program (NHIP), launched in 2015, in reducing out-of-pocket (OOP) healthcare costs and mitigating CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1276 households through a two-stage random sampling method. Data was collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching (PSM) to minimize biases and create comparable groups of enrolled and non-enrolled households in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Among the 1276 samples, 534 households (41.8%) were enrolled in the NHIP. Of them, 84.3% of households renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicate that 12.5% of enrolled households experienced CHE versus 7.5% among non-enrolled households. Enrollment in the NHIP does not contribute to lower CHE (AOR: 1.98, 95% CI: 1.21-3.24). Key factors associated with increased CHE risk were the presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. The reason for this could be inadequate coverage, where high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. We also considered the long waiting times, lack of medicines, and complex procedures for the utilization of NHIP benefits, which might result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of the NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program's effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention.
Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
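
A simplified sketch of the matching-plus-outcome-regression idea (not the SPSS workflow used in the study) is given below, with hypothetical covariates and synthetic data.

```python
# Hedged sketch: 1-to-1 nearest-neighbour propensity score matching of enrolled and non-enrolled
# households, followed by an outcome regression on the matched sample as a simple doubly
# robust-style adjustment. All variables and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1276
df = pd.DataFrame({
    "hh_size": rng.integers(1, 9, n),
    "below_poverty": rng.integers(0, 2, n),
    "ncd": rng.integers(0, 2, n),
})
p_enroll = 1 / (1 + np.exp(-(0.4 * df["ncd"] + 0.3 * df["below_poverty"] - 0.5)))
df["enrolled"] = rng.binomial(1, p_enroll.to_numpy())
p_che = 0.04 + 0.06 * df["ncd"] + 0.05 * df["below_poverty"] + 0.01 * df["hh_size"]
df["che"] = rng.binomial(1, p_che.to_numpy())

covariates = ["hh_size", "below_poverty", "ncd"]
df["ps"] = LogisticRegression().fit(df[covariates], df["enrolled"]).predict_proba(df[covariates])[:, 1]

treated = df[df["enrolled"] == 1]
control = df[df["enrolled"] == 0]
_, idx = NearestNeighbors(n_neighbors=1).fit(control[["ps"]]).kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# Outcome model on the matched sample: covariate-adjusted association of enrollment with CHE
out = LogisticRegression().fit(matched[["enrolled"] + covariates], matched["che"])
print("adjusted odds ratio for enrollment:", round(float(np.exp(out.coef_[0][0])), 2))
```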

Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal

Procedia PDF Downloads 25
5 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
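
A condensed sketch of the XGBoost-plus-SHAP workflow described above is shown below; the feature names, synthetic data, and hyperparameters are assumptions for illustration, not the study's calibration dataset.

```python
# Illustrative sketch: training an XGBoost regressor on building attributes and inspecting
# feature importance with SHAP values. Data are synthetic stand-ins for EPC-derived fields.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "year_built": rng.integers(1900, 2020, n),
    "living_area_m2": rng.uniform(40, 250, n),
    "wall_type": rng.integers(0, 4, n),          # encoded wall construction
    "insulation_level": rng.integers(0, 3, n),
    "window_share": rng.uniform(0.05, 0.4, n),
})
# Hypothetical efficiency score on a 1-100 (G-A) scale, driven mostly by age, walls and insulation
y = (60 - 0.15 * (2020 - X["year_built"]) + 8 * X["insulation_level"]
     - 5 * X["wall_type"] + rng.normal(0, 5, n)).clip(1, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 2), " MAE:", round(mean_absolute_error(y_te, pred), 2))

# Aggregated feature importance via mean absolute SHAP values
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).sort_values(ascending=False))
```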

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 57
4 Modelling Farmer’s Perception and Intention to Join Cashew Marketing Cooperatives: An Expanded Version of the Theory of Planned Behaviour

Authors: Gospel Iyioku, Jana Mazancova, Jiri Hejkrlik

Abstract:

The “Agricultural Promotion Policy (2016–2020)” represents a strategic initiative by the Nigerian government to address domestic food shortages and the challenges in exporting products at the required quality standards. Hindered by an inefficient system for setting and enforcing food quality standards, coupled with a lack of market knowledge, the Federal Ministry of Agriculture and Rural Development (FMARD) aims to enhance support for the production and activities of key crops like cashew. By collaborating with farmers, processors, investors, and stakeholders in the cashew sector, the policy seeks to define and uphold high-quality standards across the cashew value chain. Given the challenges and opportunities faced by Nigerian cashew farmers, active participation in cashew marketing groups becomes imperative. These groups serve as essential platforms for farmers to collectively navigate market intricacies, access resources, share knowledge, improve output quality, and bolster their overall bargaining power. Through engagement in these cooperative initiatives, farmers not only boost their economic prospects but can also contribute significantly to the sustainable growth of the cashew industry, fostering resilience and community development. This study explores the perceptions and intentions of farmers regarding their involvement in cashew marketing cooperatives, utilizing an expanded version of the Theory of Planned Behaviour. Drawing insights from a diverse sample of 321 cashew farmers in Southwest Nigeria, the research sheds light on the factors influencing decision-making in cooperative participation. The demographic analysis reveals a diverse landscape, with a substantial presence of middle-aged individuals contributing significantly to the agricultural sector and cashew-related activities emerging as a primary income source for a substantial proportion (23.99%). Employing Structural Equation Modelling (SEM) with Maximum Likelihood Robust (MLR) estimation in R, the research elucidates the associations among latent variables. Despite the model’s complexity, the goodness-of-fit indices attest to the validity of the structural model, explaining approximately 40% of the variance in the intention to join cooperatives. Moral norms emerge as a pivotal construct, highlighting the profound influence of ethical considerations in decision-making processes, while perceived behavioural control presents potential challenges in active participation. Attitudes toward joining cooperatives reveal nuanced perspectives, with strong beliefs in enhanced connections with other farmers but varying perceptions on improved access to essential information. The SEM analysis establishes positive and significant effects of moral norms, perceived behavioural control, subjective norms, and attitudes on farmers’ intention to join cooperatives. The knowledge construct positively affects key factors influencing intention, emphasizing the importance of informed decision-making. A supplementary analysis using partial least squares (PLS) SEM corroborates the robustness of our findings, aligning with covariance-based SEM results. This research unveils the determinants of cooperative participation and provides valuable insights for policymakers and practitioners aiming to empower and support this vital demographic in the cashew industry.
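
For illustration, a hedged sketch of an extended-TPB structural equation model is given below using the Python package semopy (an assumption made here for the example; the study's covariance-based SEM with MLR estimation was fitted in R). Constructs, items, and data are placeholders, not the survey instrument.

```python
# Rough sketch: latent TPB constructs (plus moral norms and knowledge) measured by three
# hypothetical items each, with intention regressed on the constructs. Placeholder data only.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 321
latent = rng.normal(size=(n, 5))            # attitude, subjective norm, PBC, moral norm, knowledge
items = {}
for j, prefix in enumerate(["att", "sn", "pbc", "mn", "kn"]):
    for i in range(1, 4):                   # three indicators per construct
        items[f"{prefix}{i}"] = latent[:, j] + rng.normal(0, 0.7, n)
df = pd.DataFrame(items)
df["intention"] = latent @ np.array([0.3, 0.2, 0.2, 0.4, 0.1]) + rng.normal(0, 0.5, n)

desc = """
attitude =~ att1 + att2 + att3
subj_norm =~ sn1 + sn2 + sn3
pbc =~ pbc1 + pbc2 + pbc3
moral_norm =~ mn1 + mn2 + mn3
knowledge =~ kn1 + kn2 + kn3
intention ~ attitude + subj_norm + pbc + moral_norm
attitude ~ knowledge
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect().head(12))             # factor loadings and structural path estimates
```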

Keywords: marketing cooperatives, theory of planned behaviour, structural equation modelling, cashew farmers

Procedia PDF Downloads 85
3 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data

Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard

Abstract:

Accurate estimation of remaining useful life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a barrier to the critical diagnostic decisions involved in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency in machine learning methods' decision-making mechanisms based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the prediction of the RUL is performed by a Random Forest model. The workflow involves steps such as data gathering, feature engineering, model training, and evaluation. The datasets for these critical components are used to train and evaluate the models independently. While suitable predictions are produced and their performance metrics are reasonably good, such complex models obscure the reasoning behind their predictions and may undermine the confidence of decision-makers or maintenance teams. This is followed, in the second phase, by global explanations using SHAP and local explanations using LIME to bridge the gap in reliability within industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. The proposed framework, in its third component, incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation. This allows the model not only to predict failures but also to present reasons, linking the key sensor features to possible failure mechanisms, to the relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and effective preventive measures. This step contributes to making the system more explainable. Several simple surrogate models, including decision trees and linear models, are used in yet another stage to approximately represent the complex Random Forest model. These simpler models act as interpretable stand-ins, replicating important aspects of the original model's behavior. If the feature explanations obtained from the surrogate model are cross-validated with the primary model, the insights derived are more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This feeds into a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognosis capability. These components are then presented to the decision-makers through the development of a fully transparent condition monitoring and alert system.
The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system will provide explanations for the predictions given, along with active alerts, the maintenance personnel can make informed decisions on their end regarding correct interventions to extend the life of the critical machinery.
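
A simplified sketch of the first two components, a Random Forest RUL regressor explained globally with SHAP plus a shallow decision tree surrogate, is given below; the sensor features and data are synthetic stand-ins rather than the C-MAPSS pipeline.

```python
# Rough sketch: Random Forest RUL regression, mean-|SHAP| feature importance, and a shallow
# decision tree fitted as a surrogate to approximate and cross-check the black-box behaviour.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 3000
X = pd.DataFrame({
    "T30": rng.normal(1580, 5, n),      # hypothetical sensor channels
    "T50": rng.normal(1400, 8, n),
    "Nf":  rng.normal(2388, 3, n),
    "Ps30": rng.normal(554, 2, n),
})
rul = 300 - 4 * (X["T50"] - 1400) - 2 * (X["Ps30"] - 554) + rng.normal(0, 10, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rul)

# Global explanation: mean absolute SHAP value per sensor feature (subset for speed)
shap_values = shap.TreeExplainer(rf).shap_values(X.iloc[:500])
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).sort_values(ascending=False))

# Surrogate model: a shallow tree approximating the forest's predictions
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, rf.predict(X))
print("surrogate fidelity (R2 vs. forest):", round(r2_score(rf.predict(X), surrogate.predict(X)), 2))
```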

Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset

Procedia PDF Downloads 7
2 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District

Authors: Veenapani Rajeev Verma

Abstract:

Context: Assessing facility readiness is paramount, as it can indicate the capacity of facilities to provide essential care and their resilience to health challenges. In the context of decentralization, estimation of supply-side readiness indices at the sub-national level is imperative for effective evidence-based policy but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with formidable topographical barriers and is identified as a high-priority district by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes and sporadic militant attacks. Hilly terrain, a rudimentary or absent road network, and impoverishment are characteristic of this area. Objectives: The objectives of the study are to a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators and b) ascertain supply-side barriers to service provisioning via stakeholder analysis. The study also strives to expand the analytical domain by unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-methods approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 sub-centres, 44 primary health centres, 3 community health centres and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium of checklists was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory and infection control protocols, as proposed in WHO's Service Availability and Readiness Assessment (SARA). A two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. An OLS regression was used to determine the factors explaining the composite index generated from the PCA. Stakeholder analysis was conducted to discern qualitative information. A myriad of techniques, such as observations, key informant interviews and focus group discussions using semi-structured questionnaires with both leaders and laggards, were administered for the critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for sub-centres and PHCs (the first points of contact), with composite scores of 0.47 and 0.41 respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, a more than 1-hour walk to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. Nonchalant staff attitudes, unavailability of staff quarters, leakages, and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting.
Complex dimensions such as geographic barriers and user and provider behavior are, however, not within the purview of this methodology.
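
As a rough sketch of how such a composite index can be built, the snippet below derives a facility readiness score from the first principal component of binary tracer indicators; it uses ordinary Pearson-based PCA on hypothetical data, whereas a faithful replication would use polychoric correlations for the binary and ordinal items.

```python
# Hedged sketch: a PCA-based facility readiness index from binary tracer indicators,
# rescaled to 0 (least ready) - 1 (most ready). Indicators and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_facilities = 138   # 90 sub-centres + 44 PHCs + 3 CHCs + 1 district hospital
tracers = pd.DataFrame({
    "electricity": rng.integers(0, 2, n_facilities),
    "clean_water": rng.integers(0, 2, n_facilities),
    "essential_drugs": rng.integers(0, 2, n_facilities),
    "sterilization_equipment": rng.integers(0, 2, n_facilities),
    "delivery_kit": rng.integers(0, 2, n_facilities),
})
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(tracers)).ravel()
if np.corrcoef(scores, tracers.sum(axis=1))[0, 1] < 0:   # align sign so higher score = more ready
    scores = -scores
index = (scores - scores.min()) / (scores.max() - scores.min())
print("mean readiness index:", round(index.mean(), 2))
```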

Keywords: effective coverage, principal component analysis, readiness index, universal health coverage

Procedia PDF Downloads 121
1 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant, intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
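
A conceptual sketch of the two 2D representations, a spectrogram for frequency-domain periodicity and a derivative heatmap for time-domain fluctuations, is shown below; the window sizes and synthetic series are assumptions, not the authors' implementation.

```python
# Conceptual sketch: turning a 1-D series into two 2-D views that can be fed to convolutional
# (computer-vision) layers: a spectrogram and a first/second-derivative heatmap per frame.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
t = np.arange(0, 2048)
# Synthetic series with daily-like and weekly-like periodic components plus noise
x = np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 168) + 0.1 * rng.normal(size=t.size)

# Frequency-domain view: spectrogram (frequency bins x time frames)
f, seg_t, Sxx = spectrogram(x, fs=1.0, nperseg=128, noverlap=64)

# Time-domain view: derivative "heatmap" stacking first and second differences per frame
frames = np.lib.stride_tricks.sliding_window_view(x, 128)[::64]
d1 = np.gradient(frames, axis=1)
deriv_map = np.stack([d1, np.gradient(d1, axis=1)])

print("spectrogram shape:", Sxx.shape)                 # (n_freq_bins, n_frames)
print("derivative heatmap shape:", deriv_map.shape)    # (2, n_frames, frame_length)
```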

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 43