Search results for: optimal curve speed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6563


233 Development and Adaptation of an LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During COVID-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes in behavior patterns brought about by the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, will also be tested, incorporating a concept drift detection technique for time series analysis, in order to evaluate its ability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. 
The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
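The abstract does not specify which concept drift detection technique the “resilient LGBM” uses; the Page-Hinkley test is one common choice for monitoring a stream of forecast errors and triggering a retrain when consumption behavior shifts (as during a lockdown). The sketch below is purely illustrative, with hypothetical parameter values, and is not the authors' implementation:

```python
class PageHinkley:
    """Page-Hinkley drift test: flags concept drift when the cumulative
    deviation of forecast errors from their running mean exceeds a
    threshold (lambda_). Parameter values here are illustrative only."""

    def __init__(self, delta=0.005, lambda_=5.0):
        self.delta = delta        # tolerance for small error fluctuations
        self.lambda_ = lambda_    # drift-detection threshold
        self.mean = 0.0           # running mean of observed errors
        self.cum = 0.0            # cumulative deviation statistic
        self.min_cum = 0.0        # minimum of the statistic seen so far
        self.n = 0                # number of errors observed

    def update(self, error):
        """Feed one absolute forecast error; return True if drift is detected."""
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.lambda_
```

In use, each day-ahead forecast error would be fed to `update()`; a `True` return would trigger retraining the LGBM on recent data, which is the general idea behind a drift-adaptive ("resilient") forecaster.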

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 83
232 Safety Considerations of Furanics for Sustainable Applications in Advanced Biorefineries

Authors: Anitha Muralidhara, Victor Engelen, Christophe Len, Pascal Pandard, Guy Marlair

Abstract:

Production of bio-based chemicals and materials from lignocellulosic biomass is gaining tremendous importance in advanced biorefineries, which aim at the progressive replacement of petroleum-based chemicals in transportation fuels and commodity polymers. One such attempt has resulted in the production of key furan derivatives (FD) such as furfural, HMF and MMF via acid-catalyzed dehydration (ACD) of C6 and C5 sugars, which are further converted into key chemicals or intermediates (such as furandicarboxylic acid and furfuryl alcohol). In subsequent processes, many high-potential FD are produced that can be converted into high-added-value polymers or high-energy-density biofuels. During ACD, an unavoidable polyfuranic byproduct known as humins is generated. The family of FD is very large, with varying chemical structures and diverse physicochemical properties; accordingly, the associated risk profiles may vary widely. Hazardous material (haz-mat) classification systems such as GHS (CLP in the EU) and the UN TDG Model Regulations for the transport of dangerous goods are a preliminary requirement for the appropriate classification, labelling, packaging, safe storage, and transportation of all chemicals. Considering the growing application routes of FD, the limited availability of safety-related information under these internationally recognized haz-mat classification systems is notable: safety data sheets exist only for well-known compounds such as HMF and furfural. Moreover, these classifications do not necessarily indicate the extent of risk involved when a chemical is used in any specific application. Factors such as thermal stability, speed of combustion and chemical incompatibilities can equally influence the safety profile of a compound, yet are clearly outside the scope of any haz-mat classification system. 
Irrespective of their bio-based origin, FD have so far received inconsistent remarks concerning their toxicity profiles. Given such inconsistencies, there is a risk that the large family of FD may follow the same path as ionic liquids, with some compounds being ranked as extremely thermally stable, non-flammable, and so on. Unless clarified, such claims could lead to misleading judgements when ranking a chemical by its hazard rating. Safety is a key aspect of any sustainable biorefinery operation or facility, yet it is often underestimated or neglected. To fill these data gaps and to address ambiguities and discrepancies, the current study provides preliminary insights into the safety assessment of FD and their potential targeted by-products. Drawing on the available literature and on experimental results, the physicochemical safety, environmental safety and (scenario-based) fire safety profiles of key FD, as well as side streams such as humins and levulinic acid, will be considered. The study thus focuses on defining patterns and trends that give coherent safety-related information for existing and newly synthesized FD on the market, supporting better functionality and sustainable applications.

Keywords: furanics, humins, safety, thermal and fire hazard, toxicity

Procedia PDF Downloads 149
231 Loading by Number Strategy for Commercial Vehicles

Authors: Ramalan Musa Yerima

Abstract:

The paper titled “Loading by Number” explains a strategy recently developed by the Zonal Commanding Officer of the Federal Road Safety Corps of Nigeria, covering Sokoto, Kebbi and Zamfara States of Northern Nigeria. The strategy is aimed at reducing competition, which will invariably lead to reductions in speed, dangerous driving, crash rates, injuries, property damage and deaths from road traffic crashes (RTC). This research paper presents a study focused on enhancing the safety of commercial vehicles. The background of this study highlights the alarming statistics related to commercial vehicle crashes in Nigeria, with a focus on Sokoto, Kebbi and Zamfara States; such crashes often result in significant damage to property, loss of lives, and economic costs. The aim is to investigate and propose an effective strategy to enhance the safety of commercial vehicles. The study recognizes the pressing need for heightened safety measures in commercial transportation, as it impacts not only the well-being of drivers and passengers but also overall public safety. To achieve the objectives, an examination of accident data, including causes and contributing factors, was performed to identify critical areas for improvement. The major finding of the study reveals that when competition comes into play within the realm of commercial driving, it has detrimental effects on road safety and resource management. Commercial drivers are pushed to complete their routes quickly and deliver goods on time, or push themselves to arrive quickly to secure more passengers and new contracts. This competitive environment, fuelled by internal and external pressures such as tight deadlines, poverty and greed, often leads to sad endings. 
The study recommends that if the loading-by-number strategy is integrated with other safety measures, such as driver training programs, regulatory enforcement, and infrastructure improvements, commercial vehicle safety can be significantly enhanced. The “Loading by Number” approach is designed to ensure that the sequence in which drivers depart from motor park ‘A’ is communicated to the officials of motor park ‘B’, who then assign returning passengers in that same sequence, regardless of which driver arrives first. In conclusion, this paper underscores the significance of improving the safety measures of commercial vehicles, as they are often larger and heavier than other vehicles on the road; whenever they are involved in accidents, the consequences can be more severe. Commercial vehicles are also frequently involved in long-haul or interstate transportation, which means they cover longer distances and spend more time on the road. This increased exposure to driving conditions increases the probability of accidents occurring. By implementing the suggested measures, policymakers, transportation authorities, and industry stakeholders can work collectively towards ensuring a safer commercial transportation system.
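The departure-sequencing rule described in the abstract (park A communicates its departure order to park B, and park B assigns return loads in that order rather than by arrival order) amounts to a first-in-first-out queue. A minimal sketch, with hypothetical park and driver identifiers:

```python
from collections import deque

class MotorPark:
    """Sketch of the 'Loading by Number' protocol: the destination park
    receives the origin park's departure sequence and assigns returning
    passengers in that sequence, regardless of physical arrival order.
    Class and field names are illustrative, not from the paper."""

    def __init__(self, name):
        self.name = name
        self.return_queue = deque()  # departure order received from the origin park

    def receive_departure_order(self, driver_ids):
        # Sequence communicated by the origin park's officials.
        self.return_queue.extend(driver_ids)

    def assign_return_load(self):
        # Next return load goes to the driver who departed first,
        # not the driver who arrived first.
        return self.return_queue.popleft() if self.return_queue else None
```

With this rule, a driver gains nothing by racing to arrive first, which is the mechanism by which the strategy removes the incentive to speed.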

Keywords: commercial, safety, strategy, transportation

Procedia PDF Downloads 38
230 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the Location-Allocation model in complementing existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint to the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) of the federal government, for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, examine their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data, such as the spatial location and distribution of settlements, settlement population figures, the network of roads linking them, and other landform features. These were sourced from government ministries and an open-source consortium. GIS was used as a tool for processing and analyzing such spatial features within the framework of CPT and L-AM, producing a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could serve as service centers for other hinterlands; this was accomplished using the query syntax in ArcMap™. The ArcGIS™ Network Analyst was used to conduct location-allocation analysis, apportioning groups of settlements around such service centers within a given threshold distance. 
Most of the techniques and models used by utility planners have been centered on straight-line (Euclidean) distances to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks; CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the efforts of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activity, thereby reducing food shortages and raising per capita income, as espoused by the government.
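The core allocation step (assign each settlement to its nearest service center, leaving unallocated any settlement beyond the impedance cutoff) can be sketched as follows. Note the study used ArcGIS Network Analyst with road-network impedances; the straight-line distance here is exactly the simplification the abstract argues against, and is used only to illustrate the allocation logic. All names and coordinates are hypothetical:

```python
import math

def allocate(settlements, centers, cutoff_km=2.0):
    """Assign each settlement to its nearest service center within the
    impedance cutoff. Coordinates are (x_km, y_km) pairs; settlements
    farther than the cutoff from every center remain unallocated
    (about 20 of 300 in the study)."""
    allocation = {}
    for name, (sx, sy) in settlements.items():
        best, best_d = None, float("inf")
        for c_name, (cx, cy) in centers.items():
            d = math.hypot(sx - cx, sy - cy)   # straight-line stand-in for network impedance
            if d < best_d:
                best, best_d = c_name, d
        if best_d <= cutoff_km:
            allocation[name] = best            # settlement served by this depot
    return allocation
```

In the real workflow, the distance function would be replaced by a shortest-path cost over the road network, which is what the Network Analyst location-allocation solver provides.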

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 125
229 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer

Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu

Abstract:

Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatments. However, because tumor cells show marked resistance to radiotherapy, high doses of ionizing radiation are required, causing serious damage to normal tissues near the tumor. How to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is therefore a pressing clinical question. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing effects, and silver nanoclusters (AgNCs) offer a broad prospect for tumor-targeted radiosensitization therapy due to their ultra-small size, low or absent toxicity, intrinsic fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that specifically binds mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and it can therefore serve as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its intrinsic fluorescence. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitizing effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. 
Annexin V-FITC/PI double-staining flow cytometry was used to assess the impact of the nanomaterials combined with radiotherapy on apoptosis. The results showed that NC-T5-5TR1 has a particle size of about 2 nm; ultraviolet-visible absorption spectroscopy confirmed its successful construction, and it displayed good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted, and fluoresced within, 4T1 cells. Uptake of NC-T5-5TR1 in the tumor area peaked at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined with radiotherapy it significantly inhibited the activity of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. Apoptosis was significantly increased when NC-T5-5TR1 was combined with radiation. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.

Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer

Procedia PDF Downloads 30
228 Strategy of Loading by Number for Commercial Vehicles

Authors: Ramalan Musa Yerima

Abstract:

The paper titled “Loading by Number” explains a strategy developed recently by the Zonal Commanding Officer of the Federal Road Safety Corps of Nigeria, covering Sokoto, Kebbi and Zamfara States of Northern Nigeria. The strategy is aimed at reducing competition, which will invariably lead to a reduction in speed, reduction in dangerous driving, reduction in crash rate, reduction in injuries, reduction in property damages and reduction in death through road traffic crashes (RTC). This research paper presents a study focused on enhancing the safety of commercial vehicles. The background of this study highlights the alarming statistics related to commercial vehicle crashes in Nigeria with a focus on Sokoto, Kebbi and Zamfara States, which often result in significant damage to property, loss of lives, and economic costs. The aim is to investigate and propose an effective strategy to enhance the safety of commercial vehicles. The study recognizes the pressing need for heightened safety measures in commercial transportation, as it impacts not only the well-being of drivers and passengers but also overall public safety. To achieve the objectives, an examination of accident data, including causes and contributing factors, was performed to identify critical areas for improvement. The major finding of the study reveals that when competition comes into play within the realm of commercial driving, it has detrimental effects on road safety and resource management. Commercial drivers are pushed to complete their routes quickly and deliver goods on time, or they push themselves to arrive quickly for more passengers and new contracts. This competitive environment, fuelled by internal and external pressures such as tight deadlines, poverty and greed, often leads to sad endings. 
The study recommends that if a strategy called loading by number is integrated with other safety measures, such as driver training programs, regulatory enforcement, and infrastructure improvements, commercial vehicle safety can be significantly enhanced. The “Loading by Number” approach is designed to ensure that the sequence of departure of drivers from motor park ‘A’ would be communicated to motor park officials of park ‘B’, who would then assign returning passengers in that sequence, regardless of which driver arrives first. In conclusion, this paper underscores the significance of improving the safety measures of commercial vehicles, as they are often larger and heavier than other vehicles on the road. Whenever they are involved in accidents, the consequences can be more severe. Commercial vehicles are also frequently involved in long-haul or interstate transportation, which means they cover longer distances and spend more time on the road. This increased exposure to driving conditions increases the probability of accidents occurring. By implementing the suggested measures, policymakers, transportation authorities, and industry stakeholders can work collectively toward ensuring a safer commercial transportation system.

Keywords: commercial, safety, strategy, transport

Procedia PDF Downloads 42
227 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals

Authors: Jonathan Sahu, Jill Aylott

Abstract:

Individuals with Autism, Intellectual and Developmental Disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations in understanding and interacting with wider society, but also by societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enabling such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary health care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy in reducing health inequalities in these individuals. Methods: The research methodology adopted was that of an “insider researcher”. Data collection included both quantitative and qualitative data, i.e. a mixed-method approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers, to assess their understanding and awareness of the systems, processes and evidence-based practice needed to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with “case study” as a primary method, and organisational performance data were then compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of the different types of advocacy present in the organisation, gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents and complaints. Results: Secondary data demonstrate near-compliance of the organisation with the current national benchmarking standard (Monitor Compliance Framework). 
However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005 and poor knowledge of the organisational systems, processes and evidence-based practice applied to people with AIDD. In addition, frontline health care workers had poor knowledge and awareness of advocacy and advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to delivering quality care, optimal patient experience and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance in a way that will stand the test of professional and public scrutiny.

Keywords: advocacy, autism, health inequalities, intellectual developmental disabilities, quality of care

Procedia PDF Downloads 194
226 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis

Authors: Wang Shuai, Su Rong, K. J. Tseng, V. Viswanathan, S. Ramakrishna

Abstract:

The More-Electric-Aircraft concept in the aircraft industry places increasing demands on embedded starter/generators (ESG). The high-speed, high-temperature environment within an engine poses great challenges to the operation of such machines. In view of these challenges, squirrel cage induction machines (SCIM) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components and low torque ripple. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resulting non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations lead to feasible solutions, and the corresponding observations are elaborated. The developed mathematical procedures also proved to be an effective framework for optimization among electromagnetic, thermal and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis was conducted to validate the resulting machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase their slot areas to accommodate more windings. Since the space available for embedding such machines inside an engine is usually short in length, an axial air-gap arrangement appears more appealing than its radial-gap counterpart. The approach above was adopted in case studies designing series of axial-flux induction machines (AFIMs) and radial-flux induction machines (RFIMs) with increasing power ratings, and the following observations were obtained: under the strict rotor diameter limitation, the AFIM extended axially to gain slot area, while the RFIM expanded radially at the same axial length. 
Beyond certain power ratings, the AFIM led to a long cylindrical geometry, while the RFIM topology retained the desired short disk shape. Besides the different dimension-growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradation in power factor, torque ripple and rated slip as power ratings increased. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline to assist potential users in deciding between AFIM and RFIM for relevant applications.
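The abstract states that the design process yields one candidate design per (p, q, zq) configuration, "easily implemented via loop patterns", with infeasible configurations filtered out. A skeleton of such a loop is sketched below; the feasibility predicate stands in for the paper's electromagnetic, thermal and mechanical constraint checks, which are not given in the abstract, and the slot-count relation is the standard one for an integral-slot three-phase winding:

```python
def enumerate_designs(pole_pairs, slots_per_pole_phase, conductors_per_slot,
                      phases=3, feasible=lambda p, q, zq: True):
    """Loop-pattern skeleton for the SCIM design search: one candidate
    design per (p, q, zq) configuration, filtered by a caller-supplied
    feasibility predicate (a placeholder for the real constraint checks)."""
    designs = []
    for p in pole_pairs:
        for q in slots_per_pole_phase:
            for zq in conductors_per_slot:
                slots = 2 * p * phases * q  # total stator slots for an integral-slot winding
                if feasible(p, q, zq):
                    designs.append({"p": p, "q": q, "zq": zq, "slots": slots})
    return designs
```

A caller would plug in a predicate encoding the actual design constraints; configurations that fail it are simply skipped, matching the observation that not all configurations lead to feasible solutions.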

Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine

Procedia PDF Downloads 435
225 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with eight input variables, was identified as the most efficient overall, while configuration 10, with four input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. 
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. The results of this study also suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions, thus contributing to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate the models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
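The four evaluation metrics named above have standard definitions, sketched here in pure Python for reference (the study itself does not publish its code, and the sample values below are hypothetical):

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R-squared, as used to evaluate the ET0 models."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n                 # mean absolute error
    mse = sum(e * e for e in errors) / n                  # mean squared error
    rmse = math.sqrt(mse)                                 # root mean squared error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)       # total sum of squares
    r2 = 1.0 - (mse * n) / ss_tot if ss_tot else float("nan")
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}
```

An R² near 1 with an MAE of a few hundredths of a mm/day, as reported, means the predicted daily ET₀ series tracks the observed series almost exactly.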

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 20
224 Restless Leg Syndrome as the Presenting Symptom of Neuroendocrine Tumor

Authors: Mustafa Cam, Nedim Ongun, Ufuk Kutluana

Abstract:

Introduction: Restless Legs Syndrome (RLS) is a common, under-recognized disorder that disrupts sleep and diminishes quality of life (1). The conditions most commonly associated with RLS include renal failure, iron and folic acid deficiency, peripheral neuropathy, pregnancy, celiac disease, Crohn’s disease and, rarely, malignancy (2). Despite a clear relation between low peripheral iron and increased prevalence and severity of RLS, the prevalence and clinical significance of RLS in iron-deficient anemic populations is unknown (2). We report here a case of RLS due to iron deficiency in the setting of a neuroendocrine tumor. Report of Case: A 35-year-old man was referred to our clinic with general weakness, weight loss (10 kg in 2 months) and a 2-month history of uncomfortable sensations in his legs with an urge to move, partially relieved by movement. The symptoms were present every day, worsening in the evening; the discomfort forced the patient to get up and walk around at night. RLS was severe, with a score of 22 on the International RLS rating scale. The patient had no past medical history. The patient underwent a complete set of blood analyses and the following abnormal values were found (normal limits within brackets): hemoglobin 9.9 g/dl (14-18), MCV 70 fL (80-94), ferritin 3.5 ng/mL (13-150). Brain and spine magnetic resonance imaging was normal. The patient was seen in consultation by the gastroenterology clinic, and gastrointestinal endoscopy was performed to investigate the etiology of the iron deficiency anemia. Gastric biopsy results allowed us to reach the diagnosis of a neuroendocrine tumor, and the patient was referred to the oncology clinic. Discussion: The first important consideration from this case report is that the patient was referred to our clinic because of severe RLS symptoms that dramatically reduced his quality of life. However, our clinical workup clearly demonstrated that RLS was not the primary disease. 
Considering the information available for this patient, we believe that the most likely possibility is that RLS was secondary to iron deficiency, a very well-known and established cause of RLS in the literature (3,4). Neuroendocrine tumors (NETs) are rare epithelial neoplasms with neuroendocrine differentiation that most commonly originate in the lungs and gastrointestinal tract (5). NETs vary widely in their clinical presentation; symptoms are often nonspecific and can be mistaken for those of other, more common conditions (6). Fifty percent of patients with a reported disease stage have either regional or distant metastases at diagnosis (7). Accurate and earlier NET diagnosis is the first step in shortening the time to optimal care and improving outcomes for patients (8). The most important message from this case report is that RLS symptoms can sometimes be the sign of a life-threatening condition. Conclusion: Careful and complete collection of clinical and laboratory data should be carried out in RLS patients. In particular, if RLS onset coincides with weight loss and iron deficiency anemia, gastric endoscopy should be performed. Malignancy is a rare etiology in RLS patients, and to our knowledge this is the first case of a neuroendocrine tumor presenting with RLS.

Keywords: neurology, neuroendocrine tumor, restless legs syndrome, sleep

Procedia PDF Downloads 263
223 The Antioxidant Activity of Grape Chkhaveri and Its Wine Cultivated in West Georgia (Adjaria)

Authors: Maia Kharadze, Indira Djaparidze, Maia Vanidze, Aleko Kalandia

Abstract:

The modern scientific community studies the chemical components and antioxidant activity of different kinds of vines according to their varietal purity and location. To our knowledge, this kind of research has not been conducted in Georgia yet. The object of our research was the Chkhaveri vine, one of the oldest varieties of the Black Sea basin. We studied Chkhaveri grapes, juice, and wine (half-dry, rose-colored, produced with European technologies) from different altitudes, their technical markers, the qualitative and quantitative composition of their biologically active compounds, and their antioxidant activity. The amount of phenols was determined using the Folin-Ciocalteu reagent; flavonoids, catechins and anthocyanins by spectral methods; and antioxidant activity by the DPPH method. Several compounds were identified using HPLC-UV-Vis and UPLC-MS methods. Six samples of the Chkhaveri variety, from 5, 300, 360, 380, 400 and 780 meter altitudes, were taken and analyzed. The sample taken from the 360 m altitude is distinguished by its cluster mass (383.6 grams) and high amount of sugar (20.1%). The sample taken from the five-meter altitude is distinguished by high acidity (0.95%). Unlike other grape varieties, such a concentration of sugar and relatively low levels of citric acid ultimately lead to the individuality of Chkhaveri wine. The biologically active compounds of Chkhaveri were studied in 2014, 2015 and 2016. The amount of total phenols in the samples of 2016 fruit varies from 976.7 to 1767.0 mg/kg, the amount of anthocyanins from 721.2 to 1630.2 mg/kg, and the amount of flavonoids from 300.6 to 825.5 mg/kg. A relatively high amount of anthocyanins was found in the Chkhaveri from the 780-meter altitude: 1630.2 mg/kg. Accordingly, the amounts of phenols and flavonoids are high: 1767.9 mg/kg and 825.5 mg/kg. These characteristics are low in the samples gathered from 5 meters above sea level: anthocyanins 721.2 mg/kg, total phenols 976.7 mg/kg, and flavonoids 300.6 mg/kg.
The highest amount of bioactive compounds is found in the Chkhaveri samples from high altitudes: as the altitude rises, the environment becomes harsher, and the plant has to develop a better defense system using phenolic compounds. The technology used for the production of the wine also plays a large role in the composition of the final product. Optimal techniques of maceration and ageing were worked out. When Chkhaveri is pressed, there are no anthocyanins in the juice; however, the amount of anthocyanins rises during maceration. After fermentation on the dregs, the amount of anthocyanins is 55% (521.3 mg/l), total phenols 80% (1057.7 mg/l) and flavonoids 23.5 mg/l. The antioxidant activity of the samples was also determined, expressed as the percentage inhibition of the samples. All samples have high antioxidant activity; for instance, in samples from 780 meters above sea level the antioxidant activity was 53.5%, relatively high compared to the sample from 5 m above sea level, with an antioxidant activity of 30.5%. Thus, there is a correlation between the amount of anthocyanins and antioxidant activity. This project was fulfilled with the financial support of the Georgia National Science Foundation (Grant AP/96/13, Grant 216816). Any idea in this publication is possessed by the author and may not represent the opinion of the Georgia National Science Foundation.
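The DPPH antioxidant-activity figures quoted above are percent inhibition values. A minimal sketch of that calculation follows; the absorbance readings are illustrative placeholders (chosen to reproduce a 53.5% figure), not data from the study:

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition of DPPH radical absorbance by the sample."""
    return (a_control - a_sample) / a_control * 100.0

# Illustrative absorbance readings (not values from the study)
print(round(dpph_inhibition(0.80, 0.372), 1))  # -> 53.5
```

A sample that leaves the control absorbance unchanged gives 0% inhibition, i.e., no antioxidant activity.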

Keywords: antioxidants, bioactive content, wine, chkhaveri

Procedia PDF Downloads 204
222 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures

Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju

Abstract:

Cement mortars and concretes are among the most used construction materials in the world, with global cement production expected to grow to approximately 5 billion tons by 2030. But cement is an energy-intensive material, and the cement industry is responsible for about 7% of the world's CO2 emissions. Natural aggregates, moreover, are non-renewable, exhaustible resources that must be used efficiently. One way to reduce the negative impact on the environment is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes, and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, in line with EU Directive 2018/851. One of the most effective active hydraulic admixtures is microsilica and, more recently, with technological development on the nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into the cement matrix improves its properties, even compared to microsilica. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium hydrosilicate, which densifies and compacts the structure. Cementitious compositions containing recycled concrete aggregates (RCA) generally present inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical resistances decrease and drying shrinkage increases; all of this is determined, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and causes the mixture to require more water for preparation.
The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating Low Level radioactive Waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and settlement class S4. The mortar was made with 100% recycled aggregate, 0-5 mm sort; in the case of the concrete, 30% recycled aggregate was used for the 4-8 and 8-16 sorts, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed with a Retsch jaw crusher and then separated by sieving into granulometric sorts. The partial replacement of cement was done progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15% wt.), nanosilica (0.75, 1.5, 2.25% wt.), and mixtures of micro- and nanosilica. The combination of silica that proved optimal from the point of view of mechanical resistance was later also used in the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and hardened state (mechanical resistance, water absorption, freeze-thaw resistance, etc.) is highlighted.

Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability

Procedia PDF Downloads 40
221 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), have been developed to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths for different features, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is smallest and the between-class separability is largest simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, sorting the reciprocals of the bandwidths in descending order gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by the proposed method, the F-score, and HSIC were 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the accuracies with feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approximates the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results were obtained. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
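The band-ordering rule described above (reciprocal bandwidth as weight, sorted in descending order) can be sketched as follows; the bandwidth values are hypothetical stand-ins, not the genetic-algorithm output from the paper:

```python
import numpy as np

def band_ranking(bandwidths):
    """Rank spectral bands by the reciprocals of their RBF bandwidths:
    a smaller bandwidth means a larger weight and a more important band."""
    weights = 1.0 / np.asarray(bandwidths, dtype=float)
    order = np.argsort(weights)[::-1]  # band indices, most important first
    return order, weights

# Hypothetical tuned bandwidths for five bands (illustrative only)
order, w = band_ranking([2.0, 0.5, 8.0, 1.0, 4.0])
print(order.tolist())  # -> [1, 3, 0, 4, 2]: band 1 (bw 0.5) ranks first
```

Taking the first k indices of `order` then yields the k-feature subsets whose accuracies are reported above.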

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 244
220 Remote Radiation Mapping Based on UAV Formation

Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov

Abstract:

High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple units of unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous system of a multicopter-based UAV swarm in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is well suited to small multicopter UAVs because of its compact size and the ease of interfacing with the UAV's onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight capability, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed, using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for the heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector that gives the swarm its source-seeking behavior. A spinning formation is also studied for both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
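The bulk-heading computation described above rests on estimating the local intensity gradient from the sensor readings at the formation vertices. A minimal sketch under a locally linear field assumption, fitting I(p) ≈ I0 + g·p by least squares, follows; the vertex positions and readings are illustrative, not experimental data:

```python
import numpy as np

def gradient_heading(positions, intensities):
    """Fit I(p) ≈ I0 + g·p to the swarm's sensor readings by least squares
    and return the unit vector along g, i.e., the direction of increasing
    intensity used as the swarm's bulk heading toward the source."""
    P = np.asarray(positions, dtype=float)
    A = np.hstack([np.ones((P.shape[0], 1)), P])  # columns: 1, x, y, z
    coef, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float),
                               rcond=None)
    g = coef[1:]                                  # estimated gradient
    return g / np.linalg.norm(g)

# Hypothetical tetrahedron vertices and intensity readings (illustrative)
pos = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [10.0, 14.0, 10.0, 10.0]        # intensity rises along +x
print(gradient_heading(pos, vals))     # ≈ [1, 0, 0]
```

With four non-coplanar vertices the fit is exactly determined, which is one motivation for the tetrahedron over a planar formation: the vertical gradient component also becomes observable.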

Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation

Procedia PDF Downloads 63
219 Direct Contact Ultrasound Assisted Drying of Mango Slices

Authors: E. K. Mendez, N. A. Salazar, C. E. Orrego

Abstract:

There is undoubted proof that increasing the intake of fruit lessens the risk of hypertension, coronary heart disease and stroke, and probable evidence that it lowers the risk of cancer. Proper fruit drying is an excellent alternative for extending shelf-life, easing commercialization, and producing ready-to-eat healthy products or ingredients. The conventional way of drying is by hot-air forced convection; however, this process step often requires a very long residence time and, furthermore, is highly energy consuming and detrimental to product quality. Nowadays, the power ultrasound (US) technique is considered an emerging and promising technology for industrial food processing. Most published works dealing with US-assisted food drying have studied the effect of ultrasonic pre-treatment prior to air-drying and the conditions of airborne US during dehydration. In this work a new approach was tested, considering the drying time and two quality parameters of mango slices dehydrated by convection assisted by 20 kHz power US applied directly, using a perforated plate as both product support and sound-transmitting surface. During the drying of mango (Mangifera indica L.) slices (ca. 6.5 g, 0.006 m height and 0.040 m diameter), their weight was recorded every hour until the final moisture content (10.0±1.0% wet basis) was reached. After preliminary tests, optimization of three drying parameters, sonication frequency (2, 5 and 8 minutes each half-hour), air temperature (50-55-60⁰C) and power (45-70-95 W), was attempted using a Box–Behnken design under the response surface methodology, with drying time, color parameters and rehydration rate of the dried samples as responses. The assays involved 17 experiments, including a quintuplicate of the central point. Dried samples with and without US application were packed in individual high-barrier plastic bags under vacuum, and then stored in the dark at 8⁰C until their analysis. All drying assays and sample analyses were performed in triplicate.
The US drying experimental data were fitted with nine models, among which the Verma model gave the best fit, with R² > 0.9999 and reduced χ² ≤ 0.000001. Significant reductions in drying time were observed for the assays that used a lower sonication frequency and high US power. At 55⁰C, 95 watts and 2 min/30 min of sonication, 10% moisture content was reached in 211 min, compared with 320 min for the same test without the use of US (blank). Rehydration rates (RR), defined as the ratio of the rehydrated sample weight to that of the dry sample, were also larger than those of the blanks and, in general, the higher the US power, the greater the RR. The direct-contact, intermittent US treatment of mango slices used in this work improves drying rates and the rehydration ability of the dried fruit. This technique can thus be used to reduce energy processing costs and the greenhouse gas emissions of fruit dehydration.
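The Verma thin-layer model cited above expresses the moisture ratio as MR(t) = a·exp(−kt) + (1−a)·exp(−gt). A minimal sketch evaluating it follows; the coefficients are illustrative placeholders, not the fitted values from this study:

```python
import math

def verma_mr(t: float, a: float, k: float, g: float) -> float:
    """Verma thin-layer drying model: moisture ratio at time t (min)."""
    return a * math.exp(-k * t) + (1 - a) * math.exp(-g * t)

# Illustrative coefficients (not the study's fitted parameters)
a, k, g = 0.7, 0.015, 0.004
print(round(verma_mr(0, a, k, g), 3))    # -> 1.0 at t = 0 by construction
print(round(verma_mr(211, a, k, g), 3))  # moisture ratio after 211 min
```

The two exponential terms let the model capture the fast initial and slow final drying stages, which is consistent with its strong fit to intermittent-sonication data.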

Keywords: ultrasonic assisted drying, fruit drying, mango slices, contact ultrasonic drying

Procedia PDF Downloads 321
218 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming

Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute

Abstract:

Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; carbon accumulation in stable forms therefore not only supports and increases the organic matter content of the soil, but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determining the optimal intensity and combinations of liming, in order to ensure optimal quantitative and qualitative carbon parameters, is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011–2013 period. The effect of liming at different intensities (at a rate of 0.5 every 7 years and 2.0 every 3-4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at 590 nm wavelength using glucose standards. The SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in a water extract at a soil-to-water ratio of 1:5. The spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodical liming (at the 0.5 and 2.0 liming rates) on the SOC content of the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment periodically limed at the 2.0 rate every 3–4 years it was approximately 0.18 percentage points lower.
It was revealed that liming significantly decreased the DOC concentration in the soil. The lowest concentration of DOC (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3–4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil, resulting in the accumulation of valuable humic acids. Due to the applied liming, the HA/FA ratio, which indicates the quality of humus, increased to 1.08, compared with 0.81 in the unlimed soil. Intensive soil liming promoted the formation of humic acids in which carboxylic and phenolic groups predominate. These humic acids are characterized by a higher degree of condensation of aromatic compounds and thereby indicate intensive organic matter humification processes in the soil. The results of this research provide clear information on the character of SOC change, which could be very useful for guiding climate policy and sustainable soil management.

Keywords: acid soil, carbon sequestration, long–term liming, soil organic carbon

Procedia PDF Downloads 201
217 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has led the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second, manned or unmanned electric tow trucks tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks increases vehicle traffic inside the airport. Therefore, it is important to design the system so that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. Two main optimization strategies are proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents, which could be the aircraft, the tow trucks, and taxiway or runway intersections.
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
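At its core, the optimal-path component described above reduces to shortest-path search over a graph of the airport's taxiways. A minimal sketch using Dijkstra's algorithm on a toy network follows; the node names and taxi times are hypothetical, not data from any of the tested airports:

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra over a taxiway graph; edge weights are taxi times in seconds.
    Returns the quickest node sequence and its total time."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy taxiway network: gate -> runway holding point (times in seconds)
g = {"gate": [("A", 60), ("B", 90)],
     "A": [("C", 120)],
     "B": [("C", 30)],
     "C": [("rwy", 45)]}
print(shortest_taxi_route(g, "gate", "rwy"))  # -> (['gate', 'B', 'C', 'rwy'], 165.0)
```

In the full system, conflict detection and tow-truck allocation would adjust these edge weights (or forbid edges) before the search, which is where the centralized/decentralized trade-off discussed above comes into play.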

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 106
216 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in the transition towards low-carbon energy systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigating anthropogenic CO₂ emissions. Its advantage with respect to other CCS technologies relies on the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence may lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of both solvent kinetics and CO₂ cyclic capacity. These affect the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a given CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce the equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture: owing to their molecular structure, they require little energy during the regeneration process. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA. A lab-scale semi-batch reactor was used. The CO₂ composition of the synthetic flue gas varied from 15% v/v, typical of conventional coal combustion, to 60% v/v, the maximum CO₂ concentration allowable for optimal partial oxy-combustion operation.
The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior, with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophorone diamine (IF), which has a steric hindrance on one of its amino groups; its free amino group increases its cyclic capacity. In general, a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates. In addition, the evolution of the CO₂ loading also exhibited higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance gives this solvent a hybrid behavior, between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out with AMP were higher than those of MEA, but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, versus 70%-80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15% v/v to 60% v/v, almost double the improvement achieved by MEA. In the IF experiments, from 15% v/v to 60% v/v CO₂ the loading changed from 1.10 to 1.34 mole CO₂ per mole of solvent, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 166
215 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling the illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing also takes 24 h), in order to determine the bacterial susceptibility. Other methods, such as genotyping, the E-test and automated systems, have also been developed for testing antimicrobial susceptibility; most of them are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combined with new bioinformatics analyses, IR spectroscopy becomes a powerful technique that enables the detection of structural changes associated with resistance.
The main goal of this study is to evaluate the potential of FTIR microscopy, in tandem with machine learning algorithms, for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E.coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of the Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E.coli samples, are promising and show that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for the identification of antibiotic susceptibility.
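The "FTIR spectra plus multivariate analysis" pipeline can be illustrated with a dimensionality-reduction-plus-classifier sketch. The abstract does not specify the algorithms used, so the PCA and nearest-centroid steps below are illustrative stand-ins, run on synthetic "spectra" in which the resistant class carries an extra absorption band:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, n_components):
    """Principal components of mean-centered spectra via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def classify(train_X, train_y, test_X, n_components=2):
    """Nearest-centroid classification in PCA space: a simple stand-in for
    the multivariate analysis applied to the FTIR spectra."""
    mu, comps = pca_fit(train_X, n_components)
    Z = (train_X - mu) @ comps.T
    centroids = {c: Z[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    Zt = (test_X - mu) @ comps.T
    return np.array([min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
                     for z in Zt])

# Synthetic "spectra": the resistant class has an extra Gaussian absorption band
base = rng.normal(0, 0.05, size=(40, 100))
band = np.exp(-((np.arange(100) - 30) ** 2) / 20.0)
y = np.array([0] * 20 + [1] * 20)   # 0 = sensitive, 1 = resistant
X = base + np.outer(y, band)
pred = classify(X, y, X)
print((pred == y).mean())           # accuracy on this separable toy data
```

Real FTIR spectra are far noisier and the class-distinguishing features subtler, which is why the study pairs high-resolution spectrometers with more advanced statistical methods.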

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 153
214 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment

Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen

Abstract:

Since the Industrial Revolution, there has been a significant increase in the extraction and use of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells developed in petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there are scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons through fissures and fractures within the cement. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas fields have promising potential as prospective candidates for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant, designed to be injected into a low-permeability reservoir and afterwards transform into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, polyacrylamide was synthesized by radical polymerization in a reaction flask.
Subsequently, through the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the next stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was heated within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time depends on the initial concentration and ranges from 4 to 20 hours, allowing it to be tuned to accommodate diverse injection strategies. Moreover, the findings indicate that the gel may be formed in acidic and highly saline environments, which ensures the suitability of this substance for challenging reservoir conditions. The rheological investigation indicates that the polymeric solution behaves as a Herschel-Bulkley fluid with somewhat elevated yield stress prior to solidification.
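The Herschel-Bulkley constitutive law referred to above can be written out directly; the parameter values below are purely illustrative, not the values fitted to the reported measurements.

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Herschel-Bulkley model: tau = tau_y + K * shear_rate**n.
    tau_y is the yield stress; below it the material does not flow."""
    if shear_rate <= 0:
        return tau_y              # applied stress must exceed tau_y before flow starts
    return tau_y + K * shear_rate ** n

# Illustrative parameters (Pa, Pa*s^n, dimensionless) -- not the measured values
tau_y, K, n = 5.0, 2.0, 0.6      # n < 1: shear-thinning once flowing
stresses = [herschel_bulkley_stress(g, tau_y, K, n) for g in (0.0, 1.0, 10.0)]
```

With `n < 1` the apparent viscosity falls as the shear rate rises, which is consistent with a solution that remains pumpable before gelation despite the elevated yield stress.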

Keywords: polyacrylamide, hofmann rearrangement, rheology, gel time

Procedia PDF Downloads 55
213 Using Virtual Reality Exergaming to Improve Health of College Students

Authors: Juanita Wallace, Mark Jackson, Bethany Jurs

Abstract:

Introduction: Exergames, VR games used as a form of exercise, are being used to reduce sedentary lifestyles in a vast number of populations. However, there is a distinct lack of research comparing the physiological response during VR exergaming to that of traditional exercises. The purpose of this study was to create a foundational investigation establishing changes in physiological responses resulting from VR exergaming in a college-aged population. Methods: In this IRB-approved study, college-aged students were recruited to play a virtual reality exergame (Beat Saber) on the Oculus Quest 2 (Facebook, 2021) in either a control group (CG) or training group (TG). Both groups consisted of subjects who were not habitual users of virtual reality. The CG played VR one time per week for three weeks and the TG played 150 min/week for three weeks. Each group played the same nine Beat Saber songs, in a randomized order, during 30-minute sessions. Song difficulty was increased during play based on song performance. Subjects completed pre- and posttests at which the following was collected: • Beat Saber Game Metrics: song level played, song score, number of beats completed per song and accuracy (beats completed/total beats) • Physiological Data: heart rate (max and avg.), active calories • Demographics Results: A total of 20 subjects completed the study; nine in the CG (3 males, 6 females) and 11 (5 males, 6 females) in the TG. • Beat Saber Song Metrics: The TG improved performance from a normal/hard difficulty to hard/expert. The CG stayed at the normal/hard difficulty. At the pretest there was no difference in game accuracy between groups. However, at the posttest the CG had a higher accuracy. • Physiological Data (Table 1): Average heart rates were similar between the TG and CG at both the pre- and posttest. However, the TG expended more total calories.
Discussion: Due to the lack of peer-reviewed literature on VR exergaming using Beat Saber, the results of this study cannot be directly compared. However, the results of this study can be compared with the previously established trends for traditional exercise. In traditional exercise, an increase in training volume equates to increased efficiency at the activity. The TG should naturally increase in difficulty at a faster rate than the CG because they played 150 minutes per week. Heart rate and caloric responses also increase during traditional exercise as load increases (i.e., speed or resistance). The TG reported an increase in total calories due to a higher difficulty of play. The decrease in song accuracy in the TG can be explained by the increased difficulty of play. Conclusion: VR exergaming is comparable to traditional exercise for loads within 50-70% of maximum heart rate. The ability to use VR for health could motivate individuals who do not engage in traditional exercise. In addition, individuals in health professions can and should promote VR exergaming as a viable way to increase physical activity and improve health in their clients/patients.

Keywords: virtual reality, exergaming, health, heart rate, wellness

Procedia PDF Downloads 160
212 Description of Decision Inconsistency in Intertemporal Choices and Representation of Impatience as a Reflection of Irrationality: Consequences in the Field of Personalized Behavioral Finance

Authors: Roberta Martino, Viviana Ventre

Abstract:

Empirical evidence has, over time, confirmed that the behavior of individuals is inconsistent with the descriptions provided by the Discounted Utility Model, an essential reference for calculating the utility of intertemporal prospects. The model assumes that individuals calculate the utility of intertemporal prospects by adding up the values of all outcomes, obtained by multiplying the cardinal utility of each outcome by the discount function estimated at the time the outcome is received. The shape of the discount function is crucial for the preferences of the decision maker because it represents the perception of the future, and it determines whether preferences are temporally consistent or temporally inconsistent. In particular, because different formulations of the discount function lead to different conclusions in predicting choice, the descriptive ability of models with a hyperbolic trend is greater than that of linear or exponential models. Choices that are suboptimal from any temporal point of view are the consequence of this mechanism, whose psychological factors are encapsulated in the trend of the discount rate. In addition, analyzing the decision-making process from a psychological perspective, there is an equivalence between the selection of dominated prospects and a degree of impatience that decreases over time. The first part of the paper describes and investigates the anomalies of the Discounted Utility Model by relating the cognitive distortions of the decision maker to the emotional factors that are generated during the evaluation and selection of alternatives. Specifically, by studying the degree to which impatience decreases, it is possible to quantify how the psychological and emotional mechanisms of the decision maker result in a lack of decision persistence. In addition, this description presents inconsistency as the consequence of an inconsistent attitude towards time-delayed choices.
The second part of the paper presents an experimental phase in which we show the relationship between inconsistency and impatience in different contexts. Analysis of the degree to which impatience decreases confirms the influence of the decision maker's emotional impulses for each anomaly of the utility model discussed in the first part of the paper. This work provides an application in the field of personalized behavioral finance. Indeed, the numerous behavioral diversities, evident even in the degrees of decrease in impatience in the experimental phase, support the idea that optimal strategies may not satisfy all individuals in the same way. With the aim of homogenizing the categories of investors and providing a personalized approach to advice, the results proven in the experimental phase are used in a complementary way with information from the field of behavioral finance to implement the Analytic Hierarchy Process model in intertemporal choices, which is useful for strategic personalization. In the construction of the Analytic Hierarchy Process, the degree of decrease in impatience is understood as reflecting irrationality in decision-making and is therefore used for the construction of weights between anomalies and behavioral traits.
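The time inconsistency discussed above can be illustrated numerically: under a hyperbolic discount function the preference between a smaller-sooner and a larger-later reward can reverse as both dates approach, while an exponential discounter never reverses. A minimal sketch, where the discount rates k and r and the reward pair are arbitrary illustrative values, not taken from the experimental phase:

```python
import math

def exponential_discount(t, r=0.3):
    """Exponential discount function: constant discount rate (time-consistent)."""
    return math.exp(-r * t)

def hyperbolic_discount(t, k=1.0):
    """Hyperbolic discount function: the implied discount rate falls with delay,
    i.e. impatience decreases over time (time-inconsistent)."""
    return 1.0 / (1.0 + k * t)

def prefers_larger_later(discount, small, t_small, large, t_large):
    """True if the discounted value of the larger-later reward exceeds
    that of the smaller-sooner reward."""
    return large * discount(t_large) > small * discount(t_small)

# 50 now vs. 80 in one year: the hyperbolic agent takes the 50 now...
near = prefers_larger_later(hyperbolic_discount, 50, 0.0, 80, 1.0)
# ...but viewing the same pair 12 years ahead, prefers to wait for the 80.
far = prefers_larger_later(hyperbolic_discount, 50, 12.0, 80, 13.0)
```

The reversal between `near` and `far` is exactly the inconsistency that the Discounted Utility Model with exponential discounting cannot produce.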

Keywords: analytic hierarchy process, behavioral finance, financial anomalies, impatience, time inconsistency

Procedia PDF Downloads 43
211 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in a voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from the Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of the training set of signal waveforms using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theorem the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculation of the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
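The explicit Tikhonov solution mentioned above takes one line of linear algebra. The sketch below uses a random toy forward model rather than the actual J-PET sampling matrix or its PCA prior, so the dimensions and the regularization weight are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_samples = 20, 8                    # waveform length vs. eight threshold samples
A = rng.normal(size=(n_samples, n_features))     # toy measurement operator (assumption)
x_true = np.sin(np.linspace(0.0, np.pi, n_features))   # smooth "signal waveform"
y = A @ x_true + 0.01 * rng.normal(size=n_samples)     # noisy samples

# Tikhonov-regularized recovery: x_hat = (A^T A + lam I)^{-1} A^T y.
# A PCA-based prior would replace lam*I with the inverse covariance
# estimated from the training set of waveforms.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_features), A.T @ y)

# From Bayes' theorem the posterior covariance is available in closed form,
# which is what permits an explicit recovery-error formula:
cov = 0.01 ** 2 * np.linalg.inv(A.T @ A + lam * np.eye(n_features))
```

Because the regularized problem has this closed-form minimizer, no iterative solver is needed per signal, which matters at detector event rates.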

Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization

Procedia PDF Downloads 420
210 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows

Authors: S. Pradhan, V. Kumaran

Abstract:

Centrifugal gas separation processes effect separation by utilizing the difference in the mole fraction in a high-speed rotating cylinder caused by the difference in molecular mass, and consequently the centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated due to temperature gradients at the cylinder wall or due to inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for secondary flow is restricted to very long cylinders, where the length is large compared to the diameter, in the limit of high stratification parameter, where the gas is confined to a thin layer near the wall of the cylinder, and it assumes that there is no mass difference between the two species when calculating the secondary flow. There are two objectives of the present analysis of the rarefied gas flow in a rotating cylinder. The first is to remove the restriction of high stratification parameter, to generalize the solutions to low rotation speeds where the stratification parameter may be O(1), and to apply them to dissimilar gases, considering the difference in molecular mass of the two species. Secondly, we would like to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds numbers, and stratification parameters.
In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the cylinder curved surface and at the end-caps due to linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effect of sources of mass, momentum and energy within the flow domain are also analyzed. The results of the analytical solutions are compared with the results of DSMC simulations for three types of forcing, a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (gas temperature at the wall is different from wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10 %, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100 and the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.
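For orientation, the stratification parameter can be read as the ratio of the cylinder's peripheral speed to the most probable thermal molecular speed; at equilibrium (solid-body rotation, uniform temperature) the density then varies as exp(A² r²/R²). The sketch below uses that common definition as an assumption; it is not the authors' generalized Onsager solution:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def stratification_parameter(m, omega, radius, T):
    """A = sqrt(m (omega R)^2 / (2 k_B T)): peripheral speed over the most
    probable thermal speed of a molecule of mass m (kg) at temperature T (K)."""
    return math.sqrt(m * (omega * radius) ** 2 / (2.0 * K_B * T))

def density_ratio_wall_to_axis(A):
    """Equilibrium radial density profile n(r) ∝ exp(A^2 r^2 / R^2),
    evaluated at the wall r = R relative to the axis r = 0."""
    return math.exp(A ** 2)

# At the low stratification considered here (A ≈ 0.707) the gas still fills
# the cylinder: the wall density is only ~65% above the axis density.
ratio = density_ratio_wall_to_axis(0.707)
```

This makes concrete why the high-stratification Onsager limit (gas pressed into a thin wall layer, i.e. A large) fails at A of order one, the regime targeted by the generalized analysis.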

Keywords: rotating flows, generalized onsager and carrier-Maslen model, DSMC simulations, rarefied gas flow

Procedia PDF Downloads 375
209 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage. In this service, security is vital both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor and multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provide more reliable identification than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an Ensemble Support Vector Machine (SVM), in which the weights of the base SVMs are optimized after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme thereby provides better user authentication and better security, distinguishing between genuine and fake users.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique has been developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been stolen. The results prove that the authentication process is optimal and the stored information is secured.
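The weighted SVM ensemble can be sketched generically: base classifiers vote, each scaled by a weight, and those weights are what the AFSA search would optimize. The sketch below stands in simple threshold classifiers for trained SVMs and a fixed weight vector for the AFSA result, purely for illustration:

```python
def make_stump(threshold):
    """Stand-in for a trained base SVM: classify a 1-D feature as +1/-1."""
    return lambda x: 1 if x > threshold else -1

def ensemble_predict(classifiers, weights, x):
    """Weighted vote of the base classifiers. In the paper the weights would
    be the AFSA-optimized values; here they are fixed illustrative numbers."""
    score = sum(w * clf(x) for clf, w in zip(classifiers, weights))
    return 1 if score > 0 else -1

base = [make_stump(t) for t in (0.2, 0.5, 0.8)]
weights = [0.2, 0.5, 0.3]        # hypothetical output of the AFSA search

label = ensemble_predict(base, weights, 0.6)
```

The same weighted-score structure underlies any SVM ensemble fusion; only the base learners and the weight-search procedure (here AFSA) change.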

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 234
208 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, using platforms such as the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach with a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on per-square-meter property values in each city block. These results, which integrate knowledge from multiple disciplines and culminate in a master's thesis, are expected to be presented and published at the upcoming conference.
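The hedonic pricing idea reduces to regressing (log) property value on pollution exposure alongside other attributes; the coefficient on the pollutant is its implicit price. The sketch below uses synthetic data with assumed coefficients, not the Bogotá database:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic block-level data (assumed values, for illustration only)
pm25 = rng.uniform(10.0, 40.0, n)          # local PM2.5 exposure, ug/m^3
area = rng.uniform(40.0, 120.0, n)         # dwelling size, m^2
log_price = 14.0 - 0.02 * pm25 + 0.004 * area + 0.01 * rng.normal(size=n)

# Hedonic regression: log price per m^2 on property and environmental attributes
X = np.column_stack([np.ones(n), pm25, area])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# beta[1] is the implicit price of air quality: in this synthetic setup each
# extra ug/m^3 of PM2.5 lowers property value by about 2%.
```

In practice the regression would include many more controls (location, age, access to transit), since omitting attributes correlated with pollution biases the implicit price.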

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 34
207 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, by acting as a buffer between production and demand. In particular, the highest potential for storage is when it is connected closer to the loads. Yet, low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage of fossil fuel-based technologies with regard to regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and promote its development. The present study is restricted to decentralised control of storage: it is assumed that the storage units operate independently of one another without exchanging information – as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as this is the decentralised service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail.
First, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State Of Charge (SOC) as a decision variable; and finally algorithms using a load forecast – the impact of whose accuracy is discussed – to generate PS. A set of performance metrics was defined in order to quantitatively evaluate their operation with respect to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power is far from optimal: a single value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance. However, these depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
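The threshold-plus-charge-recovery logic of the first category can be sketched as follows; the threshold, capacity, and recovery rate are arbitrary illustrative values, and a real implementation would add round-trip efficiency and inverter power limits:

```python
def peak_shave(load, threshold, capacity, soc0=0.5, recover_limit=None):
    """Discharge the battery whenever demand exceeds `threshold`; recharge
    only while grid draw is below the threshold, at a capped rate, so that
    charging never creates a new peak. Returns (grid profile, final SOC)."""
    soc = soc0 * capacity
    recover_limit = recover_limit if recover_limit is not None else 0.2 * threshold
    grid = []
    for p in load:
        if p > threshold and soc > 0:
            discharge = min(p - threshold, soc)      # shave the excess
            soc -= discharge
            grid.append(p - discharge)
        elif p < threshold and soc < capacity:
            charge = min(threshold - p, recover_limit, capacity - soc)
            soc += charge                            # constant-rate charge recovery
            grid.append(p + charge)
        else:
            grid.append(p)
    return grid, soc

profile = [1.0, 1.2, 4.5, 0.8, 5.0, 1.1, 0.9]        # kW, 1-minute steps
shaved, soc = peak_shave(profile, threshold=3.0, capacity=2.0)
```

An SOC-based variant would make `recover_limit` a function of the current state of charge, and a forecast-based one would schedule the recovery against the predicted load.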

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 98
206 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity

Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido

Abstract:

Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. For a long time, Ruta g. herb has been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the presence of the produced substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, usually some components remained unseparated near the start or finish line. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of the new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase to the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow. The described effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones.
In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer at a controlled velocity, by a moving pipette driven by a 3D positioning machine. The delivery rate of the solvent to the adsorbent layer is equal to or lower than that of conventional development; therefore, chromatograms can be developed with an optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so a higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also makes it possible to perform convenient continuous gradient elution practically without the so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. Fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.

Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens

Procedia PDF Downloads 264
205 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important keys in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on the structures' dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still some gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. Therefore, this work contributes to a better understanding of this phenomenon by bringing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model which was applied to a prototype footbridge. Numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, and an accelerometer was placed at the waist of each subject to measure the acceleration of the center of mass at the same time. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters are estimated.
In addition, experimental data of a walking pedestrian on a flexible structure were used to validate the interaction model presented, through comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient at faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also able to estimate the structural response with good accuracy, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variabilities. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it is efficient at reproducing the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validations are required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
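In such bipedal models the leg is typically represented as a spring, optionally with a damper, so the ground reaction force along the leg axis follows from its compression. A minimal sketch of that force law, with assumed rather than fitted parameter values:

```python
def leg_force(k, c, L0, L, L_dot):
    """Axial ground reaction force of a spring-damper leg:
    F = k*(L0 - L) - c*L_dot while compressed (L < L0), zero otherwise.
    k: stiffness (N/m), c: damping (N*s/m), L0: rest leg length (m),
    L_dot: rate of change of leg length (m/s, negative while compressing)."""
    if L >= L0:
        return 0.0                 # leg at or above rest length: no contact force
    return k * (L0 - L) - c * L_dot

# Assumed, roughly human-scale parameters (not the values fitted in the study)
k, c, L0 = 20_000.0, 300.0, 1.0

mid_stance = leg_force(k, c, L0, 0.97, 0.0)      # 3 cm compression, spring only
loading = leg_force(k, c, L0, 0.97, -0.2)        # still compressing: damper adds force
```

This makes the reported sensitivity plausible: both the stiffness term and the damping term enter the ground reaction force directly, so changing k or c reshapes the measured force curve.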

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 103
204 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent-based distributed voltage control. The method uses a flat agent architecture; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with the DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailment while keeping the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, the DG curtailment cost, and the voltage damage cost (implemented as a penalty function). The total cost is calculated iteratively for increasingly strict limits by plotting the voltage damage cost and the losses cost against a varying voltage limit band; the method then provides the optimal limits, closest to the nominal value, with minimum total loss cost. To achieve voltage control, the whole network is divided into multiple control regions, each downstream of its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At each time step, a token is generated by the OLTCA and transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the controlling devices in the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or if the controlling capabilities of all the downstream control devices are at their limits, the OLTC is used as a last resort.
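The token-passing sweep described above reduces, in its simplest form, to scanning feeder nodes in order and stopping at the first voltage outside the permitted band. The band values and feeder profile below are made-up per-unit numbers, not the paper's data:

```python
# Hypothetical voltage band, in per unit (the paper derives stricter
# limits iteratively; ±5% is only a placeholder here).
V_MIN, V_MAX = 0.95, 1.05

def pass_token(voltages):
    """Return the index of the first node violating the band, or None.
    Mimics the OLTCA's token travelling node to node along the feeder."""
    for node, v in enumerate(voltages):
        if not (V_MIN <= v <= V_MAX):
            return node          # token stops; the Load Agent takes over
    return None                  # full sweep completed, no remedial action

feeder = [1.00, 1.01, 1.03, 1.06, 1.02]   # illustrative per-unit profile
print(pass_token(feeder))                  # node 3 exceeds V_MAX
```

Once a violated node is returned, the Load Agent would contact the nearby controlling devices (SVC, DG agents), escalating to the OLTC only as a last resort.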
For a realistic study, simulations were performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. The simulations were executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis was performed with respect to DG penetration. The results indicate that the costs of losses and DG curtailment are directly proportional to DG penetration, while case 2 shows a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the stricter limits calculated by the cost optimization move towards the statutory limits of ±10% of nominal as DG penetration increases: for penetrations of 25, 45, and 65%, the calculated limits are ±5, ±6.25, and ±8.75%, respectively. The results show that the voltage control algorithm proposed in case 1 deals with the voltage control problem instantly but with higher losses, whereas case 2 reduces the network losses over time through the proposed iterative loss cost optimization performed by the OLTCA.
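The iterative loss-cost optimization can be illustrated by scanning candidate voltage bands and picking the one with the lowest total cost, where losses fall as the band widens while the voltage damage penalty grows. The cost functions and coefficients below are invented for illustration; only the structure (losses cost + penalty, minimised over the band) follows the abstract:

```python
def total_cost(band_pct, loss_coeff=100.0, damage_coeff=0.5):
    """Illustrative total loss cost for a given voltage limit band (% of
    nominal): operating losses shrink with a wider band, while the
    penalty-function 'voltage damage' cost grows with it."""
    losses = loss_coeff / band_pct
    damage = damage_coeff * band_pct ** 2
    return losses + damage

# Candidate bands from a strict 2.5% up to the statutory 10% of nominal.
bands = [2.5 + 1.25 * i for i in range(7)]
best_band = min(bands, key=total_cost)
print(best_band)   # → 5.0, the band with minimum total cost
```

With the real cost curves, the optimum shifts as DG penetration grows, which is consistent with the abstract's observation that the calculated limits approach the statutory ±10% at higher penetrations.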

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 288