Search results for: linear congruential algorithm
728 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering
Authors: Emiel Caron
Abstract:
Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references can be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set of highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but careful, i.e. it weights precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics
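As a minimal illustration of the scoring-and-clustering step described in this abstract, the sketch below links record pairs whose rule-based score exceeds a threshold and forms connected components with a union-find structure; the reference identifiers, scores, and threshold are invented placeholders, not the Patstat implementation.

```python
from collections import defaultdict

# Illustrative only: pair scores would come from the rule-based matching
# (bibliographic metadata extracted by regular expressions plus string
# similarity); the values and the threshold below are made up.
pair_scores = {
    ("ref_1", "ref_2"): 0.92,   # same journal, year, and similar title
    ("ref_2", "ref_3"): 0.88,   # several rules reinforce each other
    ("ref_4", "ref_5"): 0.41,   # weak evidence -> stays below threshold
}
THRESHOLD = 0.75

def find(parent, x):
    # Path-compressing find for the union-find structure.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

records = sorted({r for pair in pair_scores for r in pair})
parent = {r: r for r in records}

# Single linkage: one sufficiently strong link is enough to merge clusters.
for (a, b), score in pair_scores.items():
    if score >= THRESHOLD:
        union(parent, a, b)

clusters = defaultdict(list)
for r in records:
    clusters[find(parent, r)].append(r)
print(list(clusters.values()))  # e.g. [['ref_1', 'ref_2', 'ref_3'], ['ref_4'], ['ref_5']]
```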
Procedia PDF Downloads 194
727 Association of Sleep Duration and Insomnia with Body Mass Index Among Brazilian Adults
Authors: Giovana Longo-Silva, Risia Cristina Egito de Menezes, Renan Serenini, Márcia de Oliveira Lima, Júlia Souza de Melo, Larissa de Lima Soares
Abstract:
Introduction: Sleep duration and quality have been increasingly recognized as important factors affecting overall health and well-being, including their potential impact on body weight and composition. Previous research has shown inconsistent results regarding the association between sleep patterns and body mass index (BMI), particularly among diverse populations such as Brazilian adults. Understanding these relationships is crucial for developing targeted interventions to address obesity and related health issues. Objective: This study aimed to investigate the association between sleep duration, insomnia, and BMI among Brazilian adults using data from a large national survey focused on chronic nutrition and sleep habits. Materials and Methods: The study included 2050 participants from a population-based virtual survey. BMI was calculated using self-reported weight and height measurements. Participants also reported usual bedtime and wake time on weekdays and weekends and whether they experienced symptoms of insomnia. The average sleep duration across the entire week was calculated as follows: [(5×sleep duration on weekdays) + (2×sleep duration on weekends)]/7. Linear regression analyses were conducted to assess the association between sleep duration, insomnia, and BMI, adjusting for potential confounding factors, including age, sex, marital status, physical exercise duration, and diet quality. Results: After adjusting for confounding variables, the study found that BMI decreased by 0.19 kg/m² for each additional hour of sleep duration (95% CI = -0.37, -0.02; P = 0.03). Conversely, individuals with insomnia had a higher BMI, with an increase of 0.75 kg/m² (95% CI = 0.28, 1.22; P = 0.002) compared to those without insomnia. Conclusions: The findings suggest a significant association between sleep duration, insomnia, and BMI among Brazilian adults. Longer sleep duration was associated with lower BMI, while insomnia was associated with higher BMI. These results underscore the importance of considering sleep patterns in strategies aimed at preventing and managing obesity in this population. Further research is needed to explore the underlying mechanisms and potential interventions targeting sleep-related factors to promote healthier body weight outcomes.
Keywords: sleep, obesity, chronobiology, nutrition
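The following sketch only illustrates the weekly sleep-duration formula and an adjusted linear regression of the kind described above; the data are synthetic and the covariate set is simplified relative to the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # synthetic respondents, not the survey data

weekday_sleep = rng.normal(7.0, 1.0, n)
weekend_sleep = rng.normal(7.8, 1.2, n)
insomnia = rng.integers(0, 2, n)
age = rng.integers(19, 65, n)

# Average sleep duration across the whole week, as in the abstract:
# [(5 x weekday sleep) + (2 x weekend sleep)] / 7
sleep_week = (5 * weekday_sleep + 2 * weekend_sleep) / 7

# Synthetic BMI with a built-in sleep effect, only so the example runs.
bmi = 27 - 0.2 * sleep_week + 0.8 * insomnia + 0.02 * age + rng.normal(0, 1.5, n)

# Linear regression of BMI on sleep duration and insomnia, adjusted for age.
X = sm.add_constant(np.column_stack([sleep_week, insomnia, age]))
model = sm.OLS(bmi, X).fit()
print(model.params)      # coefficients: const, sleep duration, insomnia, age
print(model.conf_int())  # 95% confidence intervals
```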
Procedia PDF Downloads 45
726 Entry, Descent and Landing System Design and Analysis of a Small Platform in Mars Environment
Authors: Daniele Calvi, Loris Franchi, Sabrina Corpino
Abstract:
Thanks to the latest Mars missions, planetary exploration has made enormous strides over the past ten years, increasing the interest of the scientific community and beyond. These missions aim to fulfill many complex operations which are of paramount importance to mission success. Among these, a special mention goes to the Entry, Descent and Landing (EDL) functions, which require a dedicated system to overcome all the obstacles of these critical phases. The general objective of the system is to safely bring the spacecraft from orbital conditions to rest on the planet surface, following the designed mission profile. For this reason, this work aims to develop a simulation tool integrating the re-entry trajectory algorithm in order to support the EDL design during the preliminary phase of the mission. This tool was used on a reference unmanned mission, whose objective is finding bio-evidence and bio-hazards on the Martian (sub)surface in order to support a future manned mission. Regarding the concept of operations (CONOPS) of the mission, it concerns the use of Space Penetrator Systems (SPS) that will descend to the Martian surface following a ballistic fall and will penetrate the ground after the impact with the surface (to a depth of around 50 to 300 cm). Each SPS shall contain all the instrumentation required to sample and make the required analyses. Respecting the low-cost and low-mass requirements, as a result of the tool, an Entry, Descent and Impact (EDI) system based on an inflatable structure has been designed. Hence, a solution could be the one chosen by the Finnish Meteorological Institute in the Mars Met-Net mission, using an inflatable Thermal Protection System (TPS) called the Inflatable Braking Unit (IBU) and an additional inflatable decelerator. Consequently, there are three configurations during the EDI: at an altitude of 125 km the IBU is inflated at a speed of 5.5 km/s; at an altitude of 16 km the IBU is jettisoned and an Additional Inflatable Braking Unit (AIBU) is inflated; lastly, at about 13 km, the SPS is ejected from the AIBU and impacts on the Martian surface. Since all parameters are evaluated, it is possible to confirm that the chosen EDI system and strategy verify the requirements of the mission.
Keywords: EDL, Mars, mission, SPS, TPS
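The sketch below is not the authors' EDL tool; it is a deliberately simplified one-dimensional ballistic descent with an exponential Mars atmosphere, meant only to show how such a simulation can switch between the three EDI configurations at the staging altitudes quoted above. The ballistic coefficients and atmosphere constants are assumptions.

```python
import numpy as np

# Purely illustrative 1D descent model; the real tool integrates full
# re-entry trajectories. Atmosphere and ballistic coefficients are assumed.
G_MARS = 3.71                      # m/s^2
RHO0, H_SCALE = 0.020, 11100.0     # exponential atmosphere: rho = RHO0*exp(-h/H)

def beta(h):
    """Ballistic coefficient m/(Cd*A) per EDI configuration (assumed values)."""
    if h > 16_000:     # IBU inflated
        return 20.0
    elif h > 13_000:   # IBU jettisoned, AIBU inflated
        return 10.0
    else:              # SPS released from the AIBU
        return 150.0

h, v, dt = 125_000.0, 5_500.0, 0.05   # start: 125 km altitude, 5.5 km/s (from the abstract)
while h > 0:
    rho = RHO0 * np.exp(-h / H_SCALE)
    drag = 0.5 * rho * v * abs(v) / beta(h)   # deceleration from aerodynamic drag
    v += (G_MARS - drag) * dt                 # downward velocity, positive down
    h -= v * dt
print(f"impact velocity ~ {v:.0f} m/s")
```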
Procedia PDF Downloads 169
725 Predictability of Thermal Response in Housing: A Case Study in Australia, Adelaide
Authors: Mina Rouhollahi, J. Boland
Abstract:
Changes in cities' heat balance due to rapid urbanization and the urban heat island (UHI) have increased energy demands for space cooling and have resulted in uncomfortable living conditions for urban residents. Climate resilience and comfortable living spaces can be addressed through well-designed urban development. Sustainable housing can be more effective in controlling high levels of urban heat. In Australia, to mitigate the effects of UHIs and summer heat waves, one solution for sustainable housing has been the trend towards compact housing design and the construction of energy efficient dwellings. This paper analyses whether current housing configurations and orientations are effective in avoiding increased demands for air conditioning and in achieving an energy efficient residential neighborhood. A significant amount of energy is consumed to ensure thermal comfort in houses. This paper reports on the modelling of heat transfer within the homes using the measurements of radiation, convection and conduction between the exterior/interior wall surfaces and the outdoor/indoor environments, respectively. The simulation was tested on selected 7.5-star energy efficient houses constructed of typical material elements and insulation in Adelaide, Australia. The chosen dwelling designs were analyzed in extremely hot weather over one year. The data were obtained via a thermal circuit to accurately model the fundamental heat transfer mechanisms on both boundaries of the house and through the multi-layered wall configurations. The lumped capacitance model was formulated in discrete time steps by adopting a non-linear model method. The simulation results focused on the effects of solar radiation and orientation on the dynamic thermal characteristics of the houses. A high star rating did not necessarily coincide with a decrease in peak demands for cooling. A more effective approach to avoid increasing the demands for air conditioning and energy may be to integrate solar–climatic data to evaluate the performance of energy efficient houses.
Keywords: energy-efficient residential building, heat transfer, neighborhood orientation, solar–climatic data
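A minimal single-node sketch of a lumped capacitance wall model stepped in discrete time is given below; the study's model uses a full multi-layer thermal circuit with radiation, convection and conduction, so the resistances, capacitance, and weather profile here are placeholders.

```python
import numpy as np

# Illustrative single-node lumped capacitance model of a wall layer:
# dT/dt = [ (T_out - T)/R_out + (T_in - T)/R_in ] / C
R_OUT, R_IN = 0.8, 0.4   # surface-to-node thermal resistances (K*m^2/W), assumed
C = 2.0e5                # areal heat capacity of the wall node (J/(m^2*K)), assumed

dt = 60.0                                   # 1-minute discrete time step
hours = np.arange(0, 24, dt / 3600.0)
t_out = 28 + 12 * np.sin(2 * np.pi * (hours - 9) / 24)   # assumed hot-day profile (°C)
t_in = 24.0                                               # conditioned indoor set point

t_wall = np.empty_like(hours)
t_wall[0] = 26.0
for k in range(1, len(hours)):
    q = (t_out[k - 1] - t_wall[k - 1]) / R_OUT + (t_in - t_wall[k - 1]) / R_IN
    t_wall[k] = t_wall[k - 1] + dt * q / C   # explicit discrete time step

print(f"peak wall node temperature: {t_wall.max():.1f} °C")
```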
Procedia PDF Downloads 133
724 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures
Authors: Samir Chawdhury, Guido Morgenthal
Abstract:
The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion. Therefore, attention is given in this study especially to the reproduction of the wake flow simulation. The basic methodology for the implementation of the flow reproduction requires downstream velocity sampling from the template flow simulation; therefore, at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation, and finally the reproduction of the template flow field by seeding these vortex circulations, or particles, into a free stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been done, specifically, in terms of different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed for the validation of the method. The study also describes possibilities for achieving flow reproduction with less computational effort.
Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis
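As a hedged sketch of the sampling-cell-to-circulation step, the snippet below evaluates a discrete line integral of the four corner velocities around one square cell to obtain a circulation that could be assigned to a released vortex particle; the sampled velocities and cell size are placeholders rather than template-simulation output.

```python
import numpy as np

def cell_circulation(corners_uv, h):
    """
    Approximate the circulation around one square sampling cell of side h
    from velocities (u, v) sampled at its four corners, ordered
    counter-clockwise: bottom-left, bottom-right, top-right, top-left.
    Gamma = closed line integral of velocity along the cell boundary,
    evaluated edge by edge with the trapezoidal rule.
    """
    (u0, v0), (u1, v1), (u2, v2), (u3, v3) = corners_uv
    gamma = 0.0
    gamma += 0.5 * (u0 + u1) * h      # bottom edge, +x direction
    gamma += 0.5 * (v1 + v2) * h      # right edge, +y direction
    gamma += 0.5 * (u2 + u3) * (-h)   # top edge, -x direction
    gamma += 0.5 * (v3 + v0) * (-h)   # left edge, -y direction
    return gamma

# Placeholder corner velocities for one sampling cell of side 0.05 m.
corners = [(1.0, -0.2), (1.2, 0.3), (0.8, 0.4), (0.6, -0.1)]
print(cell_circulation(corners, h=0.05))
# The resulting circulation would be assigned to a vortex particle released
# into the free stream at the sampling position and time.
```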
Procedia PDF Downloads 311
723 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body since it is responsible for most of our actions, such as vision and memory. However, different diseases, such as Alzheimer's and tumors, can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and therefore take the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and most widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and prescribe the appropriate and needed treatment. Diverse image processing methods are also proposed for helping doctors in identifying and analyzing the tumor. In fact, a large number of Computer Aided Diagnostic (CAD) tools, including developed image processing algorithms, are proposed and exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction is proposed. This technique is based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it is applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it necessitates huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
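The pipeline below is a compact sketch of the DWT → PCA → SVM chain described in this abstract, written with PyWavelets and scikit-learn on placeholder arrays; it is not the MATLAB CAD tool itself, and the images and labels are synthetic.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def dwt_features(image, wavelet="haar", level=2):
    """Multi-level 2D DWT; the coarsest approximation sub-band is used as features."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()   # low-frequency coefficients at the coarsest level

# Placeholder data standing in for brain MRI slices (64x64) and labels
# (0 = benign, 1 = malignant); real features would come from segmented tumors.
rng = np.random.default_rng(1)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, 40)

X = np.array([dwt_features(img) for img in images])

# PCA shrinks the wavelet feature vector before the SVM classifier.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```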
Procedia PDF Downloads 250
722 Sensing Endocrine Disrupting Chemicals by Virus-Based Structural Colour Nanostructure
Authors: Lee Yujin, Han Jiye, Oh Jin-Woo
Abstract:
The adverse effects of endocrine disrupting chemicals (EDCs) have attracted considerable public interest. The benzene-like structure of EDCs mimics the mechanisms of hormones naturally occurring in vivo and alters the physiological function of the endocrine system. Although some of the most representative EDCs, such as polychlorinated biphenyls (PCBs) and phthalate compounds, have already been prohibited from production and use in many countries, PCBs and phthalates in plastic products, used as flame retardants and plasticizers, are still in circulation nowadays. EDCs can be released from products during use and disposal, and this causes serious environmental and health issues. Here, we developed a virus-based structurally coloured nanostructure that can detect minute EDC concentrations sensitively and selectively. These structurally coloured nanostructures exhibit characteristic angle-independent colours due to the regular virus bundle structure formed through a simple pulling technique. A designed number of different colour bands can be formed by controlling the concentration of the virus solution and the pulling speed. The virus, the M-13 bacteriophage, was genetically engineered to react with specific EDCs, typically PCBs and phthalates. The M-13 bacteriophage surface (pVIII major coat protein) was decorated with benzene-derivative-binding peptides (WHW) through a phage library method. In an initial assessment, the virus-based colour sensor was exposed to several organic chemicals including benzene, toluene, phenol, chlorobenzene, and phthalic anhydride. Along with the selectivity evaluation of the virus-based colour sensor, it was also tested for sensitivity. 10 to 300 ppm of phthalic anhydride and chlorobenzene were detected by the colour sensor, which showed significant sensitivity with a dissociation constant of about 90. Notably, all measurements were analyzed through principal component analysis (PCA) and linear discriminant analysis (LDA) and exhibited clear discrimination ability upon exposure to two categories of EDCs (PCBs and phthalates). Because of its easy fabrication, high sensitivity, and superior selectivity, the M-13 bacteriophage-based colour sensor could be a simple and reliable portable sensing system for environmental monitoring, healthcare, social security, and so on.
Keywords: M-13 bacteriophage, colour sensor, genetic engineering, EDCs
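The snippet below sketches only the PCA-followed-by-LDA discrimination step on synthetic colour-response vectors standing in for the sensor measurements; the feature values and class structure are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Synthetic colour-response vectors (e.g. colour shifts of several bands)
# for two EDC categories; real data would come from the virus-based sensor.
pcb_like = rng.normal(loc=[0.8, 0.1, 0.3, 0.5], scale=0.05, size=(30, 4))
phthalate_like = rng.normal(loc=[0.2, 0.7, 0.4, 0.1], scale=0.05, size=(30, 4))
X = np.vstack([pcb_like, phthalate_like])
y = np.array([0] * 30 + [1] * 30)   # 0 = PCB-like, 1 = phthalate-like

# PCA removes correlated/noisy directions; LDA then separates the two classes.
X_pca = PCA(n_components=3).fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(X_pca, y)
print("LDA training accuracy:", lda.score(X_pca, y))
```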
Procedia PDF Downloads 242
721 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied for modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
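A brief sketch of the thinning step that reduces a persistent water-body mask to one-pixel-wide centre lines is shown below using scikit-image; the binary mask is synthetic, whereas the actual workflow derives it from unsupervised classification of multi-temporal Sentinel 1 data.

```python
import numpy as np
from skimage.morphology import skeletonize

# Synthetic binary water mask (True = water) standing in for the persistent
# water-bodies product derived from multi-temporal Sentinel 1 classification.
mask = np.zeros((60, 60), dtype=bool)
mask[28:33, :] = True   # a broad east-west channel
mask[:, 29:32] = True   # a narrower tributary joining it

# Thinning reduces each water body to a one-pixel-wide centre line,
# which can then be vectorised and built into a geometric network.
centreline = skeletonize(mask)
print("water pixels:", int(mask.sum()), "-> centreline pixels:", int(centreline.sum()))
```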
Procedia PDF Downloads 139
720 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by the presence of a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds show a velocity, radar cross-section and, in general, characteristics similar to those of UAVs. Building on the absence of a single feature that is able to distinguish UAVs from birds, this paper uses a multiple-features approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field and related to different UAVs and birds performing various trajectories were used to extract specifically designed target-movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
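The sketch below shows one generic way a genetic algorithm can select a feature subset by scoring binary masks with cross-validated classifier accuracy; the track features, labels, and GA settings are placeholders and not the authors' optimization strategy in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_tracks, n_features = 200, 12      # placeholder radar-track feature table
X = rng.normal(size=(n_tracks, n_features))
y = rng.integers(0, 2, n_tracks)    # 0 = bird, 1 = UAV (synthetic labels)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)   # initial population
for _ in range(15):                                            # generations
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                                  # selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
        flip = rng.random(n_features) < 0.05                   # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```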
Procedia PDF Downloads 222
719 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. Presently, high-performance or integrating software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering related efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
718 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis
Authors: Srinaath Anbu Durai, Wang Zhaoxia
Abstract:
Twitter sentiment has been used as a predictor of prices or trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon works in behavioural economics to show that sentiment or emotions impact economic decisions. The latest works in this stream focus on the algorithm used as opposed to the data used. A literature review of works in this stream through the lens of the data used shows that there is a paucity of work that considers the impact of sentiments caused by an external factor on either the stock or the housing market. This is despite an abundance of works in behavioural economics that show that sentiment or emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19 for sentiment analysis; the lexicon-based tools VADER and TextBlob are used for sentiment analysis, Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score, and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., the structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and it performs better as a predictor compared to two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks
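A minimal sketch of the sentiment-scoring and Granger-causality steps is given below with VADER (TextBlob could be used analogously); the tweets, daily case counts, and sentiment series are synthetic stand-ins for the scraped Covid-19 data.

```python
import numpy as np
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from statsmodels.tsa.stattools import grangercausalitytests

# 1) Lexicon-based sentiment scoring of (made-up) Covid-19 tweets with VADER.
analyzer = SentimentIntensityAnalyzer()
sample_tweets = [
    "New restrictions announced, feeling anxious about the outbreak",
    "Vaccination drive going smoothly, relieved and hopeful",
]
for t in sample_tweets:
    print(t, "->", analyzer.polarity_scores(t)["compound"])

# 2) Granger causality between daily Covid-19 cases and the daily mean
#    sentiment score. Both series below are synthetic placeholders.
rng = np.random.default_rng(4)
days = pd.date_range("2021-05-01", periods=90, freq="D")
cases = pd.Series(rng.poisson(700, len(days)), index=days, name="cases")
# Sentiment loosely (and artificially) driven by the previous day's cases.
lagged = cases.shift(1).fillna(cases.iloc[0]).to_numpy()
sentiment = pd.Series(-0.0005 * lagged + rng.normal(0, 0.05, len(days)),
                      index=days, name="sentiment")

# grangercausalitytests checks whether the 2nd column helps predict the 1st.
data = pd.concat([sentiment, cases], axis=1)
grangercausalitytests(data.to_numpy(), maxlag=3)
```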
Procedia PDF Downloads 117
717 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is electric ships. Electric ships are at an early stage of development. Considering their special characteristics, i.e., the limited travel range, the service network needs to be re-designed carefully to guarantee the efficient operation of electric ships. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost, comprised of the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario, where all the transportation tasks are fulfilled by electric ships, and a conventional ship scenario, where all the transportation tasks are fulfilled by fuel oil ships. Results reveal that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will consume 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and the location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity, at which all costs and the total delivery time are lowest; 6) the service time limit will influence the ship schedule and ship cost.
Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
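The toy model below is only in the spirit of the mixed-integer linear program described above: it chooses charging-station locations and ship deployments to minimise construction plus fixed ship costs on an invented two-route network, and it omits the charging plan, scheduling, and emission terms of the full model.

```python
import pulp

# Toy data (invented): candidate charging ports and two routes; each route
# must have at least one charging station among the ports it visits and
# enough ships to cover its transport tasks.
ports = ["P1", "P2", "P3"]
routes = {"R1": ["P1", "P2"], "R2": ["P2", "P3"]}
station_cost = {"P1": 80, "P2": 120, "P3": 90}
ship_fixed_cost = 60
ships_needed = {"R1": 2, "R2": 3}

model = pulp.LpProblem("electric_ship_network", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", ports, cat="Binary")
ships = pulp.LpVariable.dicts("ships", routes, lowBound=0, cat="Integer")

# Objective: charging-station construction cost + fixed cost of deployed ships.
model += (
    pulp.lpSum(station_cost[p] * build[p] for p in ports)
    + ship_fixed_cost * pulp.lpSum(ships[r] for r in routes)
)

for r, visited in routes.items():
    model += ships[r] >= ships_needed[r]                  # cover the transport tasks
    model += pulp.lpSum(build[p] for p in visited) >= 1   # a reachable charging station

model.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: int(build[p].value()) for p in ports},
      {r: int(ships[r].value()) for r in routes})
```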
Procedia PDF Downloads 78
716 Study Secondary Particle Production in Carbon Ion Beam Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Ensuring accurate radiotherapy with carbon therapy requires precise monitoring of radiation dose distribution within the patient's body. This monitoring is essential for targeted tumor treatment, minimizing harm to healthy tissues, and improving treatment effectiveness while lowering side effects. In our investigation, we employed a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo simulations. Initially, Geant4 simulations were utilized to extract the initial positions of secondary particles formed during interactions between carbon ions and water. These particles included protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we studied the relationship between the carbon ion beam and these secondary particles. Interaction Vertex Imaging (IVI) is valuable for monitoring dose distribution in carbon therapy. It provides details about the positions and amounts of secondary particles, particularly protons. The IVI method depends on charged particles produced during ion fragmentation to gather information about the range by reconstructing particle trajectories back to their point of origin, referred to as the vertex. In our simulations regarding carbon ion therapy, we observed a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the target's unique elongated geometry, which hindered the straightforward transmission of forward-generated protons. Consequently, the limited protons that emerged mostly originated from points close to the target entrance. The trajectories of fragments (protons) were approximated as straight lines, and a beam back-projection algorithm, using recorded interaction positions in Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, monitoring of radiation dose, interaction vertex imaging
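As a simplified sketch of the back-projection idea, the snippet below extrapolates a straight proton track through two Si-detector hits and takes the point of closest approach to the beam axis as the vertex estimate; the geometry and hit coordinates are invented.

```python
import numpy as np

def closest_approach_to_beam(hit1, hit2, beam_point, beam_dir):
    """
    Back-project a straight proton track through two detector hits and return
    the point on the track closest to the beam axis (the estimated vertex).
    """
    p = np.asarray(hit1, float)
    d = np.asarray(hit2, float) - p
    q, e = np.asarray(beam_point, float), np.asarray(beam_dir, float)
    d /= np.linalg.norm(d)
    e /= np.linalg.norm(e)
    # Solve for the track parameter s minimising |(p + s*d) - (q + t*e)|^2.
    w = p - q
    a, b, c = d @ d, d @ e, e @ e
    s = (b * (e @ w) - c * (d @ w)) / (a * c - b * b)
    return p + s * d   # point on the proton track nearest the beam line

# Invented geometry: beam along +z through the origin, two Si-detector hits
# of a secondary proton emitted from a fragmentation point near the axis.
vertex = closest_approach_to_beam(
    hit1=(55.0, 4.0, 120.0), hit2=(105.0, 8.0, 200.0),
    beam_point=(0.0, 0.0, 0.0), beam_dir=(0.0, 0.0, 1.0),
)
print("reconstructed vertex estimate (mm):", np.round(vertex, 1))
```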
Procedia PDF Downloads 84
715 Combine Resection of Talocalcaneal Tarsal Coalition and Calcaneal Lengthening Osteotomy. Short-to-Intermediate Term Results
Authors: Naum Simanovsky, Vladimir Goldman, Michael Zaidman
Abstract:
Background: The optimal algorithm for the management of symptomatic tarsal coalition is still under discussion in the pediatric literature. It is debatable which surgical steps are essential to achieve the best outcome. Method: The investigators retrospectively reviewed the records of twelve patients with symptomatic tarsal coalition who were treated operatively between 2017 and 2019. Only painful flat feet were operated on. Two patients were excluded from the study due to a lack of sufficient follow-up. Ten of eleven feet were treated with the combination of calcaneal lengthening osteotomy (CLO) and resection of the coalition (RC). Only one foot was operated on with CLO alone. In half of our patients, Achilles lengthening was performed. For two children, medial plication was added. A short leg cast was applied to all children for 6-8 weeks, and soft shoe insoles for medial arch support were prescribed afterwards. Demographic, clinical, and radiographic records were reviewed. The outcome was evaluated using the American Orthopedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results: There were seven boys and three girls. The mean age at the time of surgery was 13.9 (range 12 to 17) years, and the mean follow-up was 18 (range 8 to 34) months. The early complications included one superficial wound infection and dehiscence. Late complications included two children with residual forefoot supination. None of our patients required additional operations during the follow-up period. All feet achieved complete deformity correction or dramatic improvement. At the last follow-up, seven feet were painless, and four children had some mild pain after intensive activities. All feet achieved excellent or good scores on the AOFAS. Conclusions: Many patients with talocalcaneal coalition also have rigid or stiff, painful flat feet. For these patients, resection of the coalition with concomitant CLO can be safely recommended.
Keywords: tarsal coalition, calcaneal lengthening osteotomy, flat foot, coalition resection
Procedia PDF Downloads 65
714 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces
Authors: Matthias Steffan, Franz Haas
Abstract:
The production process of grinding is one of the last steps in a value-adding manufacturing chain. Within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable costs and energy have already been spent on components. According to the current state of the art, therefore, large safety reserves are calculated in order to guarantee process capability. Especially for non-circular grinding, this fact leads to considerable losses of process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground sequentially in an oscillating process where the X- and Q-axes of the machine are coupled. With the approach of RPM-Synchronous Non-Circular Grinding, such workpieces can be machined in an ordinary plunge grinding process. In this process, the revolution rates of the workpiece and the grinding wheel are in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a worldwide unique machine tool that was especially designed for this technology. High revolution rates on the workpiece spindle (up to 4500 rpm) are mandatory for the success of this grinding process. This grinding approach is performed in a two-step process. For roughing, a highly porous vitrified-bonded grinding wheel with medium grain size is used. It ensures high specific material removal rates for efficiently producing the non-circular geometry on the workpiece. This process step is adapted by a force control algorithm, which uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively within the same clamping of the workpiece with two locally separated grinding spindles. The approach of RPM-Synchronous Non-Circular Grinding shows great efficiency enhancement in non-circular grinding. For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging trend in finishing machining.
Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding
Procedia PDF Downloads 283
713 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. In particular, a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality, while meeting the above-mentioned financial, time and resource constraints. Moreover, there are some prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably contributes to increasing the complexity of the mathematical model. Thus, solving it through a general-purpose solver is feasible for small instances only, while solving real-life-sized instances of such a model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases allows achieving an improvement over the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
Keywords: heuristic, MIP model, remedial course, school, timetabling
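The abstract does not spell out the constructive procedure, so the snippet below is only a generic greedy sketch: teaching units are activated in order of benefit per teacher hour, subject to an hour budget and to prerequisites; a feasible solution of this kind could then feed the timetabling step or warm-start the MIP.

```python
# Illustrative greedy construction of a feasible training offer: unit names,
# hours, benefits, and prerequisites are invented, not the Lecce instances.
units = {
    # name: (teacher_hours, benefit, prerequisites)
    "algebra_basics":   (10, 30, []),
    "algebra_advanced": (12, 25, ["algebra_basics"]),
    "grammar":          (8,  22, []),
    "essay_writing":    (10, 18, ["grammar"]),
    "geometry":         (9,  15, ["algebra_basics"]),
}
HOUR_BUDGET = 30

activated, hours_used = [], 0
# Greedy order: benefit per teacher hour, highest first.
for name in sorted(units, key=lambda n: units[n][1] / units[n][0], reverse=True):
    hours, benefit, prereqs = units[name]
    fits_budget = hours_used + hours <= HOUR_BUDGET
    prereqs_ok = all(p in activated for p in prereqs)
    if fits_budget and prereqs_ok:
        activated.append(name)
        hours_used += hours

print("activated units:", activated, "| hours used:", hours_used)
```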
Procedia PDF Downloads 605
712 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² 0.772, p-value < 0.001). In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
Procedia PDF Downloads 101
711 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
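A minimal sketch of the three-model ensemble described in this abstract is given below, averaging the class probabilities of a logistic regression, a random forest, and a neural network; the features and labels are synthetic, not the Los Angeles RP4/EPA data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for timing variables, weather forecasts, and past
# pollutant measurements; the target is a binary good/poor air-quality label.
rng = np.random.default_rng(5)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
]
for m in models:
    m.fit(X_tr, y_tr)

# Average the predicted probabilities of the three models and threshold at 0.5.
proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
ensemble_pred = (proba >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_te).mean())
```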
Procedia PDF Downloads 127
710 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's working load and the geographical position of the consuming units. This process is done manually by several experts who have experience in the geographic formation of the region, which takes a large number of days to complete the final planning, and because it is a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required for final planning creation. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of working load and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
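The snippet below is a condensed sketch of the genetic-plus-K-Means idea for capacitated clustering: candidate centroid sets are evolved, points are assigned to the nearest centroid with spare capacity, and the fitness trades off compactness against workload balance; the coordinates, capacity, and GA settings are invented and do not reproduce GBKMeans itself.

```python
import numpy as np

rng = np.random.default_rng(6)
points = rng.random((120, 2))        # consumer-unit coordinates (made up)
K, CAPACITY = 4, 35                  # reading groups, max units per group

def capacitated_assign(centroids):
    """Assign each point to the nearest centroid that still has spare capacity."""
    load = np.zeros(K, dtype=int)
    labels = np.empty(len(points), dtype=int)
    for i in range(len(points)):
        order = np.argsort(np.linalg.norm(points[i] - centroids, axis=1))
        j = next(c for c in order if load[c] < CAPACITY)
        labels[i] = j
        load[j] += 1
    return labels, load

def fitness(centroids):
    """Lower is better: within-group distance plus a workload-imbalance penalty."""
    labels, load = capacitated_assign(centroids)
    compactness = sum(np.linalg.norm(points[labels == k] - centroids[k], axis=1).sum()
                      for k in range(K))
    return compactness + 5.0 * load.std()

# Genetic loop over candidate centroid sets, with a K-Means-style refinement step.
population = [points[rng.choice(len(points), K, replace=False)] for _ in range(12)]
for _ in range(25):
    population.sort(key=fitness)
    parents = population[:6]
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, 6)], parents[rng.integers(0, 6)]
        child = np.where(rng.random((K, 1)) < 0.5, a, b)        # uniform crossover
        labels, _ = capacitated_assign(child)
        child = np.array([points[labels == k].mean(axis=0) if (labels == k).any()
                          else child[k] for k in range(K)])     # K-Means centroid update
        children.append(child + rng.normal(0, 0.02, (K, 2)))    # mutation
    population = parents + children

best = min(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
```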
Procedia PDF Downloads 198
709 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys
Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti
Abstract:
The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to a hexagonally close packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theoretical calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations of more than 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa pressure. Fe1-xNix alloys exhibit a complex microstructure which is formed by a series of complex phase transformations such as martensitic transformation, spinodal decomposition, ordering, a mono-tectoid reaction, and a eutectoid reaction at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge about the anomalous properties exhibited by these alloys. Fe1-xNix alloys have been prepared by arc melting the high purity constituent metals in an argon ambient. These alloys have been annealed at around 300°C in a vacuum-sealed quartz tube for two days to make the samples homogeneous. These alloys have been structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys have been measured using high resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). Measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3). The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to a shift in the valence bands with composition caused by intra-atomic charge redistribution.
Keywords: arc melting, core level shift, ESCA potential model, valence band
Procedia PDF Downloads 380
708 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper
Authors: Nicolas Bae, Theodore L. Karavasilis
Abstract:
Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes to minimize direct as well as indirect damage to society. Any disruption to the essential functions of tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, the seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as the passive damper system, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, which is one type of hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loading; and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without a damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be affected by wind load in their normal state and also by earthquake load after yielding of the steel plates. The modeling of the prototype tall building is conducted using the OpenSees software. Three types of model were used to verify the performance of the damper (MRF, MRF with visco-elastic damper, MRF with visco-plastic damper); 22 sets of seismic records were used, and the scaling procedure was followed according to the FEMA code. It is shown that the MRF with viscous or visco-elastic dampers is considerably more effective in reducing inelastic deformation, such as roof displacement, maximum story drift, and roof velocity, compared to the MRF alone.
Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design
Procedia PDF Downloads 193
707 The Relationship between Celebrity Worship and Religiosity: A Study in Turkish Context
Authors: Saadet Taşyürek Demirel, Halide Sena Koçyiğit, Rümeysa Fatma Çetin
Abstract:
Celebrity worship, characterized by excessive admiration and devotion towards public figures, often mirrors elements of religious fervor. This study delves into the intricate connection between celebrity worship and religiosity, particularly within the Turkish cultural context, where Islamic values predominantly shape societal norms. The investigation involves the adaptation of the Celebrity Attitude Scale into Turkish and scrutinizes the interplay between young individuals' religiosity and their extreme adulation of celebrities. Additionally, the study explores potential moderating factors, such as age and gender, that might influence this relationship. A cohort of 197 young adults, aged 19 to 30, participated in this research, responding to self-administered questionnaires that assessed their attitudes towards celebrities using the adapted Celebrity Attitude Scale, along with their self-reported religiosity. The anticipated relationship between religiosity and celebrity worship is hypothesized to exhibit a non-linear pattern. Specifically, we expect religiosity to positively predict celebrity worship tendencies among individuals with minimal to moderate religiosity levels. Conversely, a negative association between religiosity and celebrity worship is expected to manifest among participants exhibiting moderate to high levels of religiosity. The findings of this study will contribute to the comprehension of the intricate dynamics between celebrity worship and religiosity, offering insights specifically within the Turkish cultural context. By shedding light on this relationship, the study aims to enhance our understanding of the multifaceted influences that shape individuals' perceptions and behaviors towards both celebrities and religious inclinations. Methodology of the study: Quantitative research will be conducted, in which factor analysis and a correlational method will be used. The factor structure of the scale will be determined with exploratory and confirmatory factor analysis, and the reliability and internal consistency of the scale will also be assessed. Objectives of the study: This study examines the relationship between religiosity and celebrity worship among young adults in the Turkish context. The other aim of the study is to assess the Turkish validity and reliability of the Celebrity Attitude Scale and contribute it to the literature. Main Contributions of the study: The study aims to introduce celebrity worship to the Turkish literature, assess the Celebrity Attitude Scale's reliability in a Turkish sample, explore manifestations of celebrity worship, and examine its link to religiosity. This research addresses the lack of Turkish sources on celebrity worship and extends understanding of the concept.
Keywords: celebrity, worship, religiosity, god
Procedia PDF Downloads 84706 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils
Authors: Reza Ziaie Moayed, Saeideh Mohammadi
Abstract:
Soft soils are encountered in many civil engineering projects, such as coastal, marine, and road projects. Because of the low shear strength and stiffness of soft soils, large settlements and low bearing capacity occur under superstructure loads. This makes civil engineering activities more difficult and costly. In the case of soft soils, ground improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil by cement and lime has been extensively used for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils; artificial cementation increases the shear strength and stiffness of the natural soil. On the other hand, in soft soils, the use of piles to transfer loads to deeper strata is common. By using cement-treated soil around the piles, high bearing capacity and low settlement of piles can be achieved. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated by a numerical approach. For this purpose, the three-dimensional (3D) finite difference software FLAC 3D is used. Cement-treated soil exhibits strain hardening-softening behavior because of the breaking of bonds between the cementing agent and the soil particles. To simulate such behavior, a strain hardening-softening constitutive model is used for the cement-treated soft soil. Additionally, the conventional elastic-plastic Mohr-Coulomb constitutive model and a linear elastic model are used for the stress-strain behavior of the natural soil and the pile, respectively. To determine the parameters of the constitutive models, and also for verification of the numerical model, the results of available triaxial laboratory tests on cement-treated soft soil and of in-situ loading of piles in such soil are used. A parametric study is conducted to identify the parameters that govern the lateral bearing of piles in cement-treated soils. In the present paper, the effects of the length and height of the artificially cemented zone, of different pile diameters and lengths, and of the material properties are studied. Also, the effect of the choice of constitutive model for the cement-treated soil on the computed bearing capacity of the pile is investigated. Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile
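One routine step implied here is calibrating Mohr-Coulomb parameters from triaxial failure states. The sketch below fits the principal-stress form of the criterion to hypothetical triaxial data to recover a friction angle and cohesion; the stress values are illustrative and the calculation is independent of the FLAC 3D models used in the study.

```python
import numpy as np

# Hypothetical triaxial failure states for a cement-treated soft soil (kPa):
sigma3 = np.array([50.0, 100.0, 200.0])    # confining pressures
sigma1 = np.array([310.0, 430.0, 660.0])   # axial stresses at failure

# Mohr-Coulomb in principal-stress form: sigma1 = a*sigma3 + b,
# with a = (1 + sin(phi)) / (1 - sin(phi)) and b = 2*c*sqrt(a).
a, b = np.polyfit(sigma3, sigma1, 1)
phi = np.degrees(np.arcsin((a - 1.0) / (a + 1.0)))   # friction angle (deg)
c = b / (2.0 * np.sqrt(a))                           # cohesion (kPa)
print(f"phi = {phi:.1f} deg, c = {c:.1f} kPa")
```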
Procedia PDF Downloads 126705 Algae Biofertilizers Promote Sustainable Food Production and Nutrient Efficiency: An Integrated Empirical-Modeling Study
Authors: Zeenat Rupawalla, Nicole Robinson, Susanne Schmidt, Sijie Li, Selina Carruthers, Elodie Buisset, John Roles, Ben Hankamer, Juliane Wolf
Abstract:
Agriculture has radically changed the global biogeochemical cycle of nitrogen (N). Fossil fuel-enabled synthetic N-fertiliser is a foundation of modern agriculture, but when it is applied to soil, crops use only about half of it. To address N-pollution from cropping and the large carbon and energy footprint of N-fertiliser synthesis, new technologies delivering enhanced energy efficiency, decarbonisation, and a circular nutrient economy are needed. We characterised algae fertiliser (AF) as an alternative to synthetic N-fertiliser (SF) using empirical and modelling approaches. We cultivated microalgae in nutrient solution and modelled up-scaled production in nutrient-rich wastewater. Over four weeks, AF released 63.5% of its N as ammonium and nitrate and 25% of its phosphorus (P) as phosphate to the growth substrate, while SF released 100% of its N and 20% of its P. To maximise crop N-use and minimise N-leaching, we explored AF and SF dose-response curves with spinach under glasshouse conditions. AF-grown spinach produced 36% less biomass than SF-grown plants due to AF’s slower and linear N-release, while SF resulted in five times higher N-leaching loss than AF. Optimised blends of AF and SF boosted crop yield and minimised N-loss due to greater synchrony of N-release and crop uptake. Additional benefits of AF included greener leaves, lower leaf nitrate concentration, and higher microbial diversity and water holding capacity in the growth substrate. Life-cycle analysis showed that replacing the most effective SF dosage with AF lowered the carbon footprint of fertiliser production from 2.02 g CO₂ (carbon-producing) to -4.62 g CO₂ (carbon-sequestering), with a further 12% reduction when AF is produced on wastewater. Embodied energy was lowest for AF-SF blends and could be reduced by 32% when cultivating algae on wastewater. We conclude that (i) microalgae offer a sustainable alternative to synthetic N-fertiliser in spinach production and potentially other crop systems, and (ii) microalgae biofertilisers support the circular nutrient economy and several sustainable development goals. Keywords: bioeconomy, decarbonisation, energy footprint, microalgae
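Assuming the reported per-dose footprints combine linearly, a blend's carbon footprint can be interpolated between the SF and AF end members. The short Python sketch below does exactly that; the linear-mixing assumption and the chosen blend fractions are ours, for illustration only.

```python
# Illustrative linear blending of the per-dose carbon footprints quoted above:
# +2.02 g CO2 for synthetic fertiliser (SF), -4.62 g CO2 for algae fertiliser (AF).
SF_FOOTPRINT = 2.02    # g CO2 per effective dose
AF_FOOTPRINT = -4.62   # g CO2 per effective dose (carbon-sequestering)

def blend_footprint(af_fraction: float) -> float:
    """Carbon footprint of an AF-SF blend, given the AF fraction of the dose."""
    return af_fraction * AF_FOOTPRINT + (1.0 - af_fraction) * SF_FOOTPRINT

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"AF fraction {f:.2f}: {blend_footprint(f):+.2f} g CO2")
```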
Procedia PDF Downloads 138704 Dietary Patterns and Adherence to the Mediterranean Diet among Breast Cancer Female Patients in Lebanon: A Cross-Sectional Study
Authors: Yasmine Aridi, Lara Nasreddine, Maya Khalil, Arafat Tfayli, Anas Mugharbel, Farah Naja
Abstract:
Breast cancer is the most commonly diagnosed cancer among women worldwide and the second most common cause of cancer mortality. Breast cancer rates differ vastly between geographical areas, between countries, and within the same country. In Lebanon, breast cancer accounts for 38.2% of all tumor sites; these rates are still lower than those observed worldwide but remain the highest among Arab countries. Studies and evidence-based reviews show a strong association between breast cancer development and prognosis and dietary habits, specifically the Mediterranean diet (MD). As such, the aim of this study is to examine dietary patterns and adherence to the MD among a sample of 182 breast cancer female patients in Beirut, Lebanon. Subjects were recruited from two major hospitals: a private medical center and a public hospital. All subjects were administered two questionnaires: socio-demographics and Mediterranean diet adherence. Five Mediterranean scores were calculated: MS, MSDPS, PMDI, PREDIMED, and DDS. The mean age of the participants was 53.78 years. The overall adherence to the Mediterranean diet (MD) was low, since the sample means of 3 out of the 5 calculated scores were below the scores’ medians. Given that 4 out of the 5 Mediterranean scores varied significantly between the recruitment sites, women in the private medical center were found to adhere more to the MD. Our results also show that the majority of the sample population’s intakes exceed the recommendations for total and saturated fat, while meeting the requirements for fiber, EPA, DHA, and linolenic acid. Participants in the private medical center consumed significantly more calories, carbohydrates, fiber, sugar, lycopene, calcium, iron, and folate, and less fat. After conducting multivariate linear regression analyses, the following significant results were observed: positive associations between MD adherence (CPMDI, PREDIMED) and monthly income and current state of health, and negative associations between MD adherence (MSDPS, PREDIMED) and age and employment status. Our findings indicated a low overall adherence to the MD and identified factors associated with it, which suggests a need to address dietary habits among BC patients in Lebanon, specifically by encouraging them to adhere to their traditional Mediterranean diet. Keywords: Adherence, Breast cancer, Dietary patterns, Mediterranean diet, Nutrition
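A minimal sketch of the kind of multivariate linear regression described here is given below, using statsmodels with simulated data and hypothetical variable names (a PREDIMED-type score regressed on age, income, employment, and self-rated health); it illustrates the modelling approach rather than reproducing the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the survey data (n = 182); variable names assumed.
rng = np.random.default_rng(2)
n = 182
df = pd.DataFrame({
    "predimed": rng.normal(7, 2, n),             # adherence score
    "age": rng.normal(54, 10, n),
    "monthly_income": rng.choice([1, 2, 3], n),  # ordinal income band
    "employed": rng.choice([0, 1], n),
    "good_health": rng.choice([0, 1], n),        # self-rated health
})

# Multivariate linear regression of adherence on socio-demographic factors.
fit = smf.ols("predimed ~ age + monthly_income + employed + good_health",
              data=df).fit()
print(fit.params)
```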
Procedia PDF Downloads 422703 Location Uncertainty – A Probablistic Solution for Automatic Train Control
Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland
Abstract:
New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to dynamically control the speed and hence the performance of a train. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and the ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed, and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed balises or equivalent devices to correct the uncertainty in the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty that grows within the train control system cannot be tolerated beyond one missed balise. Balises contribute a significant amount towards wayside maintenance, and studies have shown that balises on the track also form a constraint on future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance, and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and an intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at the highest precision. Similar techniques are already in use in aviation, satellite, submarine, and other navigation systems. Developing a model for speed control and applying a Kalman filter is a key element of this research. This paper will summarize the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable removal of all trackside location correction balises, leading to a huge reduction in maintenance and more flexibility in future track design. Keywords: ERTMS, CBTC, ATP, ATO
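To make the filtering idea concrete, the sketch below implements a minimal one-dimensional constant-velocity Kalman filter that fuses noisy along-track position measurements into smoothed position and speed estimates. The noise parameters, sampling rate, and simulated trajectory are assumptions for illustration; the actual sensor fusion algorithm developed in this research would be more elaborate.

```python
import numpy as np

def kalman_track(z_pos, dt=0.1, q=0.5, r=4.0):
    """Minimal constant-velocity Kalman filter for 1-D train position.

    z_pos: noisy along-track position measurements (e.g. odometry), metres
    dt: sampling interval (s); q: process noise; r: measurement noise variance
    Returns filtered (position, speed) estimates at each step.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([[z_pos[0]], [0.0]])        # initial state [position, speed]
    P = np.eye(2) * 10.0
    out = []
    for z in z_pos:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append((x[0, 0], x[1, 0]))
    return out

# Simulated run: gently accelerating train, position measured with 2 m noise.
rng = np.random.default_rng(3)
t = np.arange(0, 60, 0.1)
true_pos = 0.5 * 0.3 * t**2
estimates = kalman_track(true_pos + rng.normal(0, 2.0, t.size))
print("final estimated speed (m/s):", round(estimates[-1][1], 2))
```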
Procedia PDF Downloads 410702 Prevalence of Pretreatment Drug HIV-1 Mutations in Moscow, Russia
Authors: Daria Zabolotnaya, Svetlana Degtyareva, Veronika Kanestri, Danila Konnov
Abstract:
An adequate choice of the initial antiretroviral treatment determines the treatment efficacy. In the clinical guidelines in Russia, non-nucleoside reverse transcriptase inhibitors (NNRTIs) are still considered an option for first-line treatment, while pretreatment drug resistance (PDR) testing is not routinely performed. We conducted a retrospective cohort study in HIV-positive treatment-naïve patients of the H-clinic (Moscow, Russia) who underwent PDR testing from July 2017 to November 2021. All the information was obtained from the medical records anonymously. We analyzed the mutations in the reverse transcriptase and protease genes. RT sequences were obtained with the AmpliSens HIV-Resist-Seq kit. Drug resistance was defined using the HIVdb Program v. 8.9-1. PDR was estimated using the Stanford algorithm. Descriptive statistics were performed in Excel (Microsoft Office, 2019). A total of 261 HIV-1 infected patients were enrolled in the study, including 197 (75.5%) men and 64 (24.5%) women. The mean age was 34.6±8.3 years, and the median CD4 count was 521 cells/µl (IQR 367-687 cells/µl). Data on risk factors for HIV infection were scarce. The total number of strains containing mutations in the reverse transcriptase gene was 75 (28.7%). Of these, 5 (1.9%) mutations were associated with PDR to nucleoside reverse transcriptase inhibitors (NRTIs) and 30 (11.5%) with PDR to NNRTIs. The number of strains with mutations in the protease gene was 43 (16.5%); of these, only 3 (1.1%) mutations were associated with resistance to protease inhibitors. For NNRTIs, the most prevalent PDR mutations were E138A and V106I. Most of the HIV variants exhibited a single PDR mutation; 2 mutations were found in 3 samples. Most HIV variants with a PDR mutation displayed a resistance mutation to a single drug class; 2/37 (5.4%) strains had both NRTI and NNRTI mutations. No strains were identified with PDR mutations to all three drug classes. Though earlier data demonstrated a lower level of PDR in the treatment-naïve HIV population in Russia, and our cohort may not be fully representative since it was recruited from a private clinic, it reflects the trend of increasing PDR, especially to NNRTIs. Therefore, we consider either pretreatment resistance testing or giving priority to other drug classes for first-line treatment to be necessary. Keywords: HIV, resistance, mutations, treatment
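Prevalence estimates such as these are usually reported with binomial confidence intervals. The sketch below recomputes the percentages from the counts quoted in the abstract and attaches 95% Wilson intervals using statsmodels; the choice of the Wilson method is ours, for illustration.

```python
from statsmodels.stats.proportion import proportion_confint

# Counts reported in the abstract: n = 261 treatment-naive patients.
n = 261
counts = {
    "any RT mutation": 75,
    "PDR to NRTIs": 5,
    "PDR to NNRTIs": 30,
    "any protease mutation": 43,
    "PDR to PIs": 3,
}

for label, k in counts.items():
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k}/{n} = {100*k/n:.1f}% "
          f"(95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```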
Procedia PDF Downloads 94701 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)
Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin
Abstract:
The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformations to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂, and CHx fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made using visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted in the case of Ni(111), whereas a significant difference was found for acetate adsorption. Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature programmed desorption, density functional theory
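Desorption energies are often estimated from TPD peak temperatures with the first-order Redhead approximation, E_d ≈ R·T_p·[ln(ν·T_p/β) − 3.46]. The sketch below applies it to two illustrative peak temperatures; the heating rate and the 10¹³ s⁻¹ prefactor are assumed values, and the output is not a result of this study.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def redhead_energy(T_peak, beta=2.0, nu=1e13):
    """First-order Redhead estimate of the desorption energy (kJ/mol).

    T_peak: TPD peak temperature (K); beta: heating rate (K/s);
    nu: pre-exponential factor (s^-1), commonly assumed to be ~1e13.
    """
    return R * T_peak * (np.log(nu * T_peak / beta) - 3.46) / 1000.0

# Illustrative peak temperatures only (not the measured values of this study).
for T in (193.0, 440.0):
    print(f"T_peak = {T:.0f} K -> E_des ≈ {redhead_energy(T):.0f} kJ/mol")
```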
Procedia PDF Downloads 108700 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities available to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the collected data and a linear regression model. Lower travel costs, ages above 55, and other factors are significant for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district of Tehran generating the most trips was District 2, with 23 trips, and the most frequently used mode from that location was the online taxi, with 12 trips. Then, the variables significant for mode split and travel behavior in accessing the airport were investigated for all systems. In this respect, the most crucial factor is the time it takes to get to the airport, followed by the method's user-friendliness, as a component of passenger preference. It has also been demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to access the airport via public transportation systems was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change. Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
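A binary mode-choice model of this kind is typically estimated as a logit. The sketch below fits one with statsmodels on simulated questionnaire-like data, using hypothetical predictors (access time, cost, extra luggage, business trip); it illustrates the method, not the study's estimated coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the 356 questionnaires; variable names assumed.
rng = np.random.default_rng(4)
n = 356
df = pd.DataFrame({
    "access_time": rng.normal(45, 15, n),     # minutes to the airport
    "cost": rng.normal(8, 3, n),              # travel cost (arbitrary units)
    "extra_luggage": rng.choice([0, 1], n),   # more than one suitcase
    "business": rng.choice([0, 1], n),        # business trip indicator
})
# Simulated choices generated from an assumed utility, for illustration only.
utility = (-0.04 * df.access_time - 0.15 * df.cost
           - 0.8 * df.extra_luggage - 0.5 * df.business + 2.5)
df["public_transit"] = (rng.uniform(size=n) <
                        1 / (1 + np.exp(-utility))).astype(int)

# Binary logit: probability of choosing public transport to reach the airport.
fit = smf.logit("public_transit ~ access_time + cost + extra_luggage + business",
                data=df).fit(disp=False)
print(fit.params)
```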
Procedia PDF Downloads 173699 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real-time non-invasive Brain Computer Interfaces have a significant, progressive role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, have also been discussed. Focus has been placed on exploring the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol to design an EEG-based real-time Brain Computer Interface system for analysis and classification of human emotions elicited by external audio/visual stimuli has been proposed. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer, and a set of external stimulators. Primary signal analysis and processing of the real-time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as an Artificial Neural Network. The final system would result in an inexpensive, portable, and more intuitive Brain Computer Interface for the real-time scenario of controlling prosthetic devices by translating different brain states into operative control signals. Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications
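As an illustration of the proposed feature-extraction and classification stage, the Python sketch below computes spectral band-power features from EEG epochs and trains a small neural network classifier. The sampling rate, channel count, frequency bands, and synthetic data are assumptions; the actual system described here is MATLAB-based (EEGLab/BCILab), so this is only an analogous outline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # Hz, sampling rate assumed for a portable consumer headset
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Spectral band-power features for one EEG epoch (channels x samples)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))   # mean power per channel
    return np.array(feats)

# Synthetic stand-in for labelled epochs: 200 epochs, 14 channels, 2 s each.
rng = np.random.default_rng(5)
epochs = rng.normal(size=(200, 14, 2 * FS))
labels = rng.choice([0, 1], 200)                   # e.g. calm vs. excited
X = np.array([band_powers(e) for e in epochs])

# Small artificial neural network classifier on the hybrid feature set.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```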
Procedia PDF Downloads 317