Search results for: parameters for training needs assessment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16705

475 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by industries that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses. 
Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. 
The model can interface with a plant's 'process control system' in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
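The entropy and spectral-change estimates that augment the GNN inputs can be illustrated with a minimal sketch. This is illustrative only: the function names, windowing parameters, and feature choices are assumptions, not the authors' implementation.

```python
import numpy as np

def shannon_entropy(window, bins=16):
    """Histogram-based Shannon entropy (bits) of one sensor window."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spectral_centroid(window, fs=1.0):
    """Centroid of the one-sided FFT magnitude spectrum; a drift in this
    value flags a change in the machine's dominant frequency content."""
    mags = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return float((freqs * mags).sum() / mags.sum()) if mags.sum() else 0.0

def feature_stream(series, win=64, step=32):
    """Entropy and spectral-centroid features per sliding window, to be
    appended to the forecasting model's inputs (an assumed interface)."""
    feats = []
    for start in range(0, len(series) - win + 1, step):
        w = np.asarray(series[start:start + win], dtype=float)
        feats.append((shannon_entropy(w), spectral_centroid(w)))
    return feats
```

A sudden rise in windowed entropy, or a drift in the spectral centroid between consecutive windows, would mark a candidate behavioural change for the classifier.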

Keywords: GNN, Entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, Machine Learning

Procedia PDF Downloads 130
474 A 2-D and 3-D Embroidered Textrode Testing Framework Adhering to ISO Standards

Authors: Komal K., Cleary F., Wells J S.G., Bennett L

Abstract:

Smart fabric garments enable various monitoring applications across sectors such as healthcare, sports and fitness, and the military. Healthcare smart garments monitoring EEG, EMG, and ECG rely on the use of electrodes (dry or wet). However, such electrodes, when used for long-term monitoring, can cause discomfort and skin irritation for the wearer because of their inflexible structure and weight. Ongoing research has been investigating textile-based electrodes (textrodes) in order to provide more comfortable and usable fabric-based electrodes capable of intuitive biopotential monitoring. Progress has been made in this space, but textrodes still face a critical design challenge in maintaining consistent skin contact, which directly impacts signal quality. Furthermore, there is a lack of an ISO-based testing framework to validate the electrode design and assess its ability to achieve enhanced performance, strength, usability, and durability. This study proposes the development and evaluation of an ISO-compliant testing framework for standard 2D and advanced 3D embroidered textrode designs that have a unique structure intended to establish enhanced skin contact for the wearer. This testing framework leverages the ISO standards ISO 13934-1:2013 for tensile and zone-wise strength tests, ISO 13937-2 for tear tests, and ISO 6330 for washing, validating the textrode's performance, a necessity for wearable health parameter monitoring applications. Five textrodes (C1-C5) were designed using EPC win digitization software. Varying patterns such as running stitches, lock stitches, back-to-back stitches, and moss stitches were used to create the embroidered textrode samples using Madeira HC12 conductive thread with a resistivity of 100 ohm/m. The textrode designs were then fabricated using a ZSK technical embroidery machine. A comparative analysis was conducted based on a series of laboratory tests adhering to ISO compliance requirements. 
Tests focusing on the application of strain were applied to the textrodes, including: (1) analysis of each electrode's overall surface area strength; (2) assessment of the robustness of the textrodes' boundaries; and (3) the assignment of fault test zones to each textrode, where vertical and horizontal slits of 3 mm were applied to evaluate the textrodes' performance and durability. Specific ISO-compliant washing tests were conducted multiple times on each textrode sample to assess both mechanical and chemical damage. Additionally, abrasion and pilling tests were performed to evaluate mechanical damage on the surface of the textrodes and to compare it with the washing test. Finally, the textrodes were assessed based on morphological and surface resistance changes. Results demonstrate that textrode C4, featuring a 3D layered structure consisting of foam, fabric, and conductive thread layers, significantly enhances skin-electrode contact for biopotential recording. The inclusion of the 3D foam layer was particularly effective in maintaining the shape of the electrode during strain tests, making C4 the top-performing textrode sample. Consequently, the layered 3D design structure of textrode C4 ranked highest when tested for durability, reusability, and washability. The ISO testing framework established in this study will support future research validating the durability and reliability of textrodes for a wide range of applications.
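The surface-resistance assessment above can be tracked with the quoted ~100 ohm/m thread resistivity. The sketch below is hypothetical: the stitch lengths, resistance readings, and helper names are assumptions for illustration, not values or code from the study.

```python
def thread_resistance(length_m, resistivity_ohm_per_m=100.0):
    """DC resistance of a conductive-thread stitch path; the abstract
    quotes ~100 ohm/m for the Madeira HC12 thread."""
    return length_m * resistivity_ohm_per_m

def resistance_drift(r_before, r_after):
    """Percent change in surface resistance after a wash or abrasion
    cycle; a large positive drift indicates degradation."""
    return 100.0 * (r_after - r_before) / r_before
```

For example, a 0.5 m stitch path would start near 50 ohm, and a post-wash reading of 65 ohm would correspond to a 30% drift.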

Keywords: smart fabric, textrodes, testing framework, ISO compliant

Procedia PDF Downloads 52
473 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality. Nitrogen pollution control has become a global concern. Currently, the problem of water pollution in China remains serious. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex water quality, high toxicity, and high concentration. Many studies have shown that the autotrophic anammox bacteria found in nature can combine nitrite and ammonia nitrogen without a carbon source, through functional genes, to achieve total nitrogen removal, which makes the process very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy consumption compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. The process composed of short-range nitrification and denitrification coupled with anammox ensures the removal of total nitrogen and improves the removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal treatment technology. A continuous-flow process for treating late leachate [an up-flow anaerobic sludge blanket reactor (UASB), anoxic/oxic (A/O), and an anaerobic ammonia oxidation (anammox) reactor (ANAOR)] has been developed to achieve autotrophic deep nitrogen removal. In this process, optimal process parameters such as hydraulic retention time and nitrification flow rate have been obtained and applied to the rapid start-up and stable operation of the process system with high removal efficiency. Besides, identifying the characteristics of the microbial community during the start-up of the anammox process system and analyzing its microbial ecological mechanism provide a basis for the enrichment of the anammox microbial community under high environmental stress. 
One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the amount of water treated is closer to that of landfill leachate. However, high-throughput sequencing technology still needs to be applied to analyze the changes in microbial diversity of this system, the related functional genera, and the functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization.
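Two of the process parameters named above follow from simple definitions: hydraulic retention time is reactor volume divided by influent flow rate, and removal efficiency is the relative drop in total nitrogen across the train. The sketch below uses hypothetical numbers, not the study's operating data.

```python
def hydraulic_retention_time(volume_m3, flow_m3_per_h):
    """HRT (hours) = reactor volume / influent flow rate."""
    if flow_m3_per_h <= 0:
        raise ValueError("flow rate must be positive")
    return volume_m3 / flow_m3_per_h

def total_nitrogen_removal(tn_in_mg_l, tn_out_mg_l):
    """Percent total-nitrogen removal across the reactor train."""
    return 100.0 * (tn_in_mg_l - tn_out_mg_l) / tn_in_mg_l
```

For a hypothetical 12 m³ reactor fed at 1.5 m³/h, the HRT is 8 hours; an influent of 2000 mg/L TN leaving at 200 mg/L corresponds to 90% removal.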

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification

Procedia PDF Downloads 34
472 Significant Influence of Land Use Type on Earthworm Communities but Not on Soil Microbial Respiration in Selected Soils of Hungary

Authors: Tsedekech Gebremeskel Weldmichael, Tamas Szegi, Lubangakene Denish, Ravi Kumar Gangwar, Erika Micheli, Barbara Simon

Abstract:

Following the 1992 Earth Summit in Rio de Janeiro, soil biodiversity has been recognized globally as a crucial player in guaranteeing the functioning of soil and a provider of several ecosystem services essential for human well-being. The microbial fraction of the soil is a vital component of soil fertility, as soil microbes play key roles in soil aggregate formation, nutrient cycling, humification, and degradation of pollutants. Soil fauna, such as earthworms, have huge impacts on soil organic matter dynamics, nutrient cycling, and the infiltration and distribution of water in the soil. Currently, land-use change is a global concern as evidence accumulates that it adversely affects soil biodiversity and the associated ecosystem goods and services. In this study, we examined the patterns of soil microbial respiration (SMR) and earthworm abundance, biomass, and species richness across three land-use types (grassland, arable land, and forest) in Hungary. The objectives were i) to investigate whether there is a significant difference in SMR and earthworm abundance, biomass, and species richness among land-use types, and ii) to determine the key soil properties that best predict the variation in SMR and earthworm communities. Soil samples, to a depth of 25 cm, were collected from the surrounding areas of seven soil profiles. For physicochemical parameters, soil organic matter (SOM), pH, CaCO₃, E₄/E₆, available nitrogen (NH₄⁺-N and NO₃⁻-N), potassium (K₂O), phosphorus (P₂O₅), exchangeable Ca²⁺ and Mg²⁺, soil moisture content (MC), and bulk density were measured. SMR was determined by the basal respiration method, and earthworms were extracted by the hand-sorting method described in the ISO guideline. The results showed that there was no statistically significant difference in SMR among land-use types (p > 0.05). 
However, the highest SMR was observed in grassland soils (11.77 mg CO₂ 50 g⁻¹ soil 10 days⁻¹) and the lowest in forest soils (8.61 mg CO₂ 50 g⁻¹ soil 10 days⁻¹). SMR had strong positive correlations with exchangeable Ca²⁺ (r = 0.80), MC (r = 0.72), and exchangeable Mg²⁺ (r = 0.69). We found a pronounced variation in SMR among soil texture classes (p < 0.001), with the highest value in silty clay loam soils and the lowest in sandy soils. This study provides evidence that agricultural activities can negatively influence earthworm communities: the arable land had significantly lower earthworm abundance, biomass, and species richness than the forest and grassland. Overall, in our study, land-use type had minimal effects on SMR, whereas earthworm communities were profoundly influenced by land-use type, particularly agricultural activities related to tillage. Exchangeable Ca²⁺, MC, and texture were found to be the key drivers of the variation in SMR.
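The reported correlations (e.g., r = 0.80 between SMR and exchangeable Ca²⁺) are Pearson coefficients, which can be computed directly from paired measurements. A minimal stdlib sketch follows; the sample values in the usage note are illustrative, not the study's soil data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)
```

In the study's setting, `x` would hold per-sample SMR values and `y` the matching exchangeable Ca²⁺ measurements; r close to +1 indicates the strong positive association reported.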

Keywords: earthworm community, land use, soil biodiversity, soil microbial respiration, soil property

Procedia PDF Downloads 121
471 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence

Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis

Abstract:

Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs), oncogenes, and immune surveillance processes, and by triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation – adenoma – colorectal cancer (CRC) development. Methods: Four pillars were investigated: (i) endoscopic and conventional histological examinations of patients with CRC or colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp; expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs); and expressions of the oncogene Ki67 and anti-apoptotic Bcl-2 protein; (iii) expression of CD45, an indicator of local immune surveillance (assessing mainly T and B lymphocytes locally); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA, and 10 controls, a significantly higher presence of Hp-I was found in the CRA (68%) and CRC (84%) groups compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of CRA patients with moderate/severe dysplasia (88%) and CRC patients with a moderate/severe degree of malignancy (91%). Comparable results were also obtained for Ki67, Bcl-2, and CD45 immunohistochemical expressions. 
Concluding Remarks: Hp-I seems to be involved in the sequence CRA – dysplasia – CRC, similarly to upper GI tract oncogenesis, through several pathways. Beyond Hp-I-associated insulin resistance, the major underlying mechanism responsible for the metabolic syndrome (MetS) that increases the risk of colorectal neoplasms, as implied by other Hp-I-related MetS pathologies such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to perpetual inflammatory upper GIT and colon mucosal damage, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale studies with a pathophysiological perspective are necessary to demonstrate this relationship in depth.
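The abstract reports an Hp-I prevalence of 84% in CRC patients (42 of 50) versus 30% in controls (3 of 10). The paper does not state which test it applied; a Pearson chi-square on the resulting 2x2 table is one standard way to check such a difference, sketched here for illustration only.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (e.g., Hp-I positive/negative in cases vs controls)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

For the table [[42, 8], [3, 7]] the statistic is 12.96, well above the 3.84 critical value for p = 0.05 at 1 degree of freedom, consistent with the "significantly higher presence" the authors report (with the caveat that the small control group would usually call for an exact test).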

Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis

Procedia PDF Downloads 129
470 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle an expressive number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion determines the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. 
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells which discretize the lumped components. In this way, an ideal cell size is sought so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur absorbing boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the results obtained by the FDTD method, through the electric field values and the currents in the components, with analytical results using circuit parameters.
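The Courant criterion invoked above has a closed form for a uniform 3D Yee grid: dt <= 1 / (v * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)). A minimal sketch follows (the 1 mm cell size in the usage note is a hypothetical example, not a value from the paper):

```python
from math import sqrt

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz, v=C0):
    """Maximum stable FDTD time step (seconds) for a uniform Yee grid
    with cell sizes dx, dy, dz and wave propagation velocity v."""
    return 1.0 / (v * sqrt(dx ** -2 + dy ** -2 + dz ** -2))
```

For a cubic 1 mm cell in vacuum this gives roughly 1.93 ps; halving the cell size to refine a lumped component halves the allowed time step, which is exactly the trade-off the parametric cell-size analysis must respect.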

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 136
469 Oxidative Stability of Corn Oil Supplemented with Natural Antioxidants from Cypriot Salvia fruticosa Extracts

Authors: Zoi Konsoula

Abstract:

Vegetable oils, which are rich in polyunsaturated fatty acids, are susceptible to oxidative deterioration. The lipid oxidation of oils results in the production of rancid odors and unpleasant flavors, as well as the reduction of their nutritional quality and safety. Traditionally, synthetic antioxidants are employed to retard or prevent the oxidative deterioration of oils. However, these compounds are suspected to pose health hazards. Consequently, there has recently been a growing interest in the use of natural antioxidants of plant origin for improving the oxidative stability of vegetable oils. The genus Salvia (sage) is well known for its antioxidant activity. In the Cypriot flora, Salvia fruticosa is the most widely distributed indigenous Salvia species. In the present study, extracts were prepared from S. fruticosa aerial parts using various solvents, and their antioxidant activity was evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and Ferric Reducing Antioxidant Power (FRAP) methods. Moreover, the antioxidant efficacy of all extracts was assessed using corn oil as the oxidation substrate, which was subjected to accelerated aging (60 °C, 30 days). The progress of lipid oxidation was monitored by the determination of the peroxide value, p-anisidine value, and conjugated dienes and trienes values according to the official AOCS methods. Synthetic antioxidants (butylated hydroxytoluene, BHT, and butylated hydroxyanisole, BHA) were employed at their legal limit (200 ppm) as reference. Finally, the total phenolic content (TPC) and total flavonoid content (TFC) of the prepared extracts were measured by the Folin-Ciocalteu and aluminum-flavonoid complex methods, respectively. The results of the present study revealed that although all extracts prepared from S. fruticosa exhibited antioxidant activity, the highest antioxidant capacity was recorded in the methanolic extract, followed by the extract in non-toxic, food-grade ethanol. 
Furthermore, a positive correlation between antioxidant potency and the TPC of the extracts was observed in all cases. Interestingly, sage extracts prevented lipid oxidation in corn oil at all concentrations tested; however, the magnitude of stabilization was dose-dependent. More specifically, results from the different oxidation parameters were in agreement with each other and indicated that the protection offered by the various extracts depended on their TPC. Among the extracts, the methanolic extract was the most potent in inhibiting oxidative deterioration. Finally, both methanolic and ethanolic sage extracts at a concentration of 1000 ppm exerted a stabilizing effect comparable to that of the reference synthetic antioxidants. Based on the results of the present study, sage extracts could be used to minimize or prevent lipid oxidation in oils and, thus, prolong their shelf-life. In particular, given that a dietary alcohol such as ethanol is preferable to methanol in food applications, the ethanolic extract prepared from S. fruticosa could be used as an alternative natural antioxidant.
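The DPPH assay named above is conventionally reported as percent inhibition relative to the absorbance of a control solution. A minimal sketch (the absorbance values in the usage note are hypothetical, not the study's measurements):

```python
def dpph_inhibition(abs_control, abs_sample):
    """DPPH radical-scavenging activity (%):
    (A_control - A_sample) / A_control * 100."""
    if abs_control <= 0:
        raise ValueError("control absorbance must be positive")
    return 100.0 * (abs_control - abs_sample) / abs_control
```

For instance, a control absorbance of 0.80 falling to 0.20 after adding an extract corresponds to 75% radical-scavenging activity; plotting this against extract concentration gives the dose-response behaviour the abstract describes.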

Keywords: antioxidant activity, corn oil, oxidative deterioration, sage

Procedia PDF Downloads 180
468 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, such as non-military and non-traditional security challenges, i.e., climate change, global-warming displacement, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at around $150 billion-plus per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and forced marriages of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A. & Austin, J., 2021), we now focus on the human trafficking paradigm. Indeed, utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth. 
Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data was then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements of success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks. Our preliminary data suggest these agents have been overlooked partly because, in extant research, they are difficult to detect and harder still to connect directly to the actors and organizations in human trafficking networks. Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, have avoided global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, Dark Money

Procedia PDF Downloads 74
467 Prediction of Outcome after Endovascular Thrombectomy for Anterior and Posterior Ischemic Stroke: ASPECTS on CT

Authors: Angela T. H. Kwan, Wenjun Liang, Jack Wellington, Mohammad Mofatteh, Thanh N. Nguyen, Pingzhong Fu, Juanmei Chen, Zile Yan, Weijuan Wu, Yongting Zhou, Shuiquan Yang, Sijie Zhou, Yimin Chen

Abstract:

Background: Endovascular therapy (EVT), in the form of mechanical thrombectomy following intravenous thrombolysis, is the gold standard treatment for patients with acute ischemic stroke (AIS) due to large vessel occlusion (LVO). It is well established that an ASPECTS ≥ 7 is associated with an increased likelihood of positive post-EVT outcomes compared to an ASPECTS < 7. There is also prognostic utility in coupling posterior circulation ASPECTS (pc-ASPECTS) with magnetic resonance imaging for evaluating the post-EVT functional outcome. However, the value of pc-ASPECTS applied to CT must be explored further to determine its usefulness in predicting functional outcomes following EVT. Objective: In this study, we aimed to determine whether pc-ASPECTS on CT can predict post-EVT functional outcomes among patients with AIS due to LVO. Methods: A total of 247 consecutive patients aged 18 and over receiving EVT for LVO-related AIS were recruited into a prospective database. The data were retrospectively analyzed from March 2019 to February 2022 from two comprehensive tertiary care stroke centers: Foshan Sanshui District People’s Hospital and the First People's Hospital of Foshan in China. Patient parameters included EVT within 24 hours of symptom onset, premorbid modified Rankin Scale (mRS) ≤ 2, presence of distal and terminal cerebral blood vessel occlusion, and a subsequent 24–72-hour post-stroke-onset CT scan. Univariate comparisons were performed using the Fisher exact test or χ² test for categorical variables and the Mann–Whitney U test for continuous variables. A p-value of ≤ 0.05 was considered statistically significant. Results: Of the 247 patients who met the inclusion criteria, 3 were excluded due to the absence of post-CTs and 8 for pre-EVT ASPECTS < 7. Overall, 236 individuals were examined: 196 anterior circulation ischemic strokes and 40 posterior circulation strokes of basilar artery occlusion. 
We found that both baseline post-ASPECTS and pc-ASPECTS ≥ 7 serve as strong positive markers of favorable outcomes at 90 days post-EVT. Lower rates of inpatient mortality/hospice discharge, 90-day mortality, and 90-day poor outcome were also observed. Additionally, patients in the post-ASPECTS ≥ 7 anterior circulation group had shorter door-to-recanalization time (DRT), puncture-to-recanalization time (PRT), and last known normal-to-puncture time (LKNPT). Conclusion: Patients with anterior and posterior circulation ischemic strokes and baseline post-ASPECTS and pc-ASPECTS ≥ 7 may benefit from EVT.
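The Fisher exact test named in the methods has a compact stdlib form for 2x2 tables: the two-sided p-value sums all hypergeometric table probabilities no larger than that of the observed table. This is an illustrative sketch, not the study's analysis code, and the example counts in the test are textbook values, not patient data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities <= the observed table's."""
    n, r1, c1 = a + b + c + d, a + b, a + c

    def hypergeom(x):
        # probability of a table with x in the top-left cell, margins fixed
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)

    p_obs = hypergeom(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    probs = [hypergeom(x) for x in range(lo, hi + 1)]
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))
```

Such an exact test is preferred over the χ² test when cell counts are small, which is why the study alternates between the two for its categorical comparisons.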

Keywords: endovascular therapy, thrombectomy, large vessel occlusion, cerebral ischemic stroke, ASPECTS

Procedia PDF Downloads 90
466 The Cooperation among Insulin, Cortisol and Thyroid Hormones in Morbid Obese Children and Metabolic Syndrome

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity, a disease associated with low-grade inflammation, is a risk factor for the development of metabolic syndrome (MetS). So far, MetS risk factors such as parameters related to glucose and lipid metabolism, as well as blood pressure, have been considered for the evaluation of this disease. Some ambiguities remain concerning the characteristic features of MetS observed particularly in the pediatric population. Hormonal imbalance is also important, and quite a lot of information exists about the behaviour of some hormones in adults. However, the hormonal profiles in pediatric metabolism have not been clarified yet. The aim of this study is to investigate the profiles of cortisol, insulin, and thyroid hormones in children with MetS. The study population was composed of morbid obese (MO) children without (Group 1) and with (Group 2) MetS components. WHO BMI-for-age and sex percentiles were used for the classification of obesity. Values above the 99th percentile were defined as morbid obesity. Components of MetS (central obesity, glucose intolerance, high blood pressure, high triacylglycerol levels, low levels of high-density lipoprotein cholesterol) were determined. Anthropometric measurements were performed. Ratios as well as obesity indices were calculated. Insulin, cortisol, thyroid-stimulating hormone (TSH), free T3, and free T4 analyses were performed by electrochemiluminescence immunoassay. Data were evaluated with the Statistical Package for the Social Sciences program; p<0.05 was accepted as the level of statistical significance. The mean age±SD values of Group 1 and Group 2 were 9.9±3.1 years and 10.8±3.2 years, respectively. Body mass index (BMI) values were 27.4±5.9 kg/m2 and 30.6±8.1 kg/m2, respectively. There were no statistically significant differences between the ages and BMI values of the groups. Insulin levels were statistically significantly increased in MetS in comparison with the levels measured in MO children. 
There was no difference between MO children and those with MetS in terms of cortisol, T3, T4 and TSH. However, T4 levels were positively correlated with cortisol and negatively correlated with insulin; neither correlation was observed in MO children. Cortisol levels were significantly correlated in both the MO and the MetS group. Cortisol, insulin, and thyroid hormones are essential for life. Cortisol, often called the control system for hormones, orchestrates the performance of other key hormones and appears to establish a connection between hormone imbalance and inflammation. During an inflammatory state, more cortisol is produced to fight inflammation. High cortisol levels prevent the conversion of the inactive form of the thyroid hormone, T4, into the active form, T3. Insulin is reduced due to low thyroid hormone. T3, which is essential for blood sugar control, requires cortisol levels within the normal range. The positive association of T4 with cortisol and its negative association with insulin are indicators of such a delicate balance among these hormones in children with MetS as well.

Keywords: children, cortisol, insulin, metabolic syndrome, thyroid hormones

Procedia PDF Downloads 133
465 Traditional Wisdom of Indigenous Vernacular Architecture as a Tool for Climate Resilience Among PVTG Indigenous Communities in Jharkhand, India

Authors: Ankush, Harshit Sosan Lakra, Rachita Kuthial

Abstract:

Climate change poses significant challenges to vulnerable communities, particularly indigenous populations in ecologically sensitive regions. Jharkhand, located in the heart of India, is home to several indigenous communities, including the Particularly Vulnerable Tribal Groups (PVTGs). The indigenous architecture of the region functions as a significant reservoir of climate adaptation wisdom. This study analyses its construction materials, construction techniques, design principles, climate responsiveness, cultural relevance, adaptation, integration with the environment, and the traditional wisdom that has evolved through generations. Rooted in cultural and socioeconomic traditions, this wisdom has allowed these communities to thrive, and to withstand the test of time, in a variety of climatic zones, including hot and dry, humid, and hilly terrains. Despite their historical resilience to adverse climatic conditions, PVTG communities face new and amplified challenges due to the accelerating pace of climate change. A significant research void exists in assimilating their traditional practices and local wisdom into contemporary climate resilience initiatives: most studies emphasise technologically advanced solutions, often ignoring the invaluable indigenous local knowledge that could complement and enhance these efforts. This gap highlights the need to bridge the disconnect between indigenous knowledge and contemporary climate adaptation strategies. The study aims to explore and leverage indigenous knowledge of vernacular architecture as a strategic tool for enhancing climatic resilience among the PVTGs of the region. The first objective is to understand the traditional wisdom of vernacular architecture by analyzing and documenting the distinct architectural practices and cultural significance of PVTG communities, emphasizing construction techniques, materials and spatial planning.
The second objective is to develop culturally sensitive climatic resilience strategies based on the findings on vernacular architecture, employing a multidisciplinary research approach that encompasses ethnographic fieldwork and climate data assessment considering multiple variables such as temperature variations, precipitation patterns, extreme weather events and climate change reports. The result will be a tailor-made solution integrating indigenous knowledge with modern technology and sustainable practices. By involving indigenous communities in the process, the research aims to ensure that the developed strategies are practical, culturally appropriate, and accepted. To foster long-term resilience against the global issue of climate change, traditional wisdom can bridge the gap between present needs and future aspirations, offering sustainable solutions that will empower PVTG communities. Moreover, the study emphasizes the significance of preserving and reviving traditional architectural wisdom for enhancing climatic resilience. It also highlights the need for cooperative endeavors of communities, stakeholders, policymakers, and researchers to integrate traditional knowledge into modern sustainable design methods. Through these efforts, this research will contribute not only to the well-being of PVTG communities but also to the broader global effort to build a more resilient and sustainable future, so that indigenous communities such as the PVTGs of Jharkhand can achieve climatic resilience while respecting and safeguarding the cultural heritage and peculiar characteristics of the native population.

Keywords: vernacular architecture, climate change, resilience, PVTGs, Jharkhand, indigenous people, India

Procedia PDF Downloads 61
464 Development and Characterization of Topical 5-Fluorouracil Solid Lipid Nanoparticles for the Effective Treatment of Non-Melanoma Skin Cancer

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Background: The topical and systemic toxicity associated with the present nonmelanoma skin cancer (NMSC) treatment therapy using 5-Fluorouracil (5-FU) makes it necessary to develop a novel delivery system with lower toxicity and better control over drug release. Solid lipid nanoparticles offer many advantages, such as controlled and localized release of entrapped actives, nontoxicity, and better tolerance. Aim: To investigate the safety and efficacy of 5-FU-loaded solid lipid nanoparticles as a topical delivery system for the treatment of nonmelanoma skin cancer. Method: Topical solid lipid nanoparticles of 5-FU were prepared using Compritol 888 ATO (glyceryl behenate) as the lipid component and Pluronic F68 (Poloxamer 188), Tween 80 (Polysorbate 80) and Tyloxapol (4-(1,1,3,3-tetramethylbutyl)phenol polymer with formaldehyde and oxirane) as surfactants. The SLNs were prepared by the emulsification method. The effects of different formulation parameters, viz. surfactant type and ratio, lipid ratio, and surfactant:lipid ratio, on particle size and drug entrapment efficiency were investigated. Results: Characterization of the SLNs by Transmission Electron Microscopy (TEM), Differential Scanning Calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), particle size determination, polydispersity index, entrapment efficiency, drug loading, ex vivo skin permeation and skin retention studies, and skin irritation and histopathology studies was performed. TEM results showed that the SLNs were spherical, with sizes in the range 200-500 nm. Higher encapsulation efficiency was obtained for batches having higher concentrations of surfactant and lipid; a maximum of 64.3% was found for the SLN-6 batch, with a size of 400.1±9.22 nm and PDI of 0.221±0.031. Optimized SLN batches and a marketed 5-FU cream were compared for flux across rat skin and skin drug retention.
Lower flux and higher skin retention were obtained for the SLN formulation in comparison to the topical 5-FU cream, which ensures less systemic toxicity and better control of drug release across the skin. Chronic skin irritation studies showed no serious erythema or inflammation, and histopathology studies showed no significant change in the physiology of the epidermal layers of rat skin. These studies suggest that the optimized SLN formulation is more efficacious than the marketed cream and safer for long-term NMSC treatment regimens. Conclusion: The topical and systemic toxicity associated with long-term use of 5-FU in the treatment of NMSC can be minimized through controlled release, significant drug retention, and minimal flux across the skin. The study may provide a better alternative for effective NMSC treatment.
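The encapsulation-efficiency figures quoted above are conventionally obtained indirectly from the unentrapped drug fraction. A minimal sketch of that standard calculation (the formula is the conventional definition, not stated in the abstract; the masses are purely illustrative):

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Indirect entrapment efficiency: EE% = (W_total - W_free) / W_total * 100,
    where W_free is the unentrapped drug assayed in the aqueous phase."""
    if total_drug_mg <= 0:
        raise ValueError("total drug mass must be positive")
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# e.g. 100 mg of drug added to the formulation, 35.7 mg recovered unentrapped
print(round(entrapment_efficiency(100.0, 35.7), 1))
```

With these illustrative inputs the result matches the 64.3% reported for the SLN-6 batch.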

Keywords: 5-FU, topical formulation, solid lipid nanoparticles, non melanoma skin cancer

Procedia PDF Downloads 497
463 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends 7 billion USD per year on health issues related to shoulder pain. With respect to origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, patients were placed in a standard sitting position and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. The second-order statistics are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. The quantitative features were then combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions.
Multinomial logistic regression is widely used for classification into more than two categories, such as the three lesion types used in this study. In the classifier, backward elimination was used to select the most relevant feature subset, i.e., the subset giving the trained classifier the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case in turn was left out and used to test the classifier trained on the remaining cases. Performance of the proposed CAD system was measured as accuracy against the physician's assessment; the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features that interpret echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Based on the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, which was developed according to the assessments of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further studies to improve the prediction.
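The second-order statistics described above are computed from grey-level co-occurrence matrices. A minimal numpy sketch under stated assumptions (images quantised to 8 grey levels and a random toy patch for illustration; the study used full 8-bit images and additional metrics such as cluster shade and Haralick correlation):

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    made symmetric and normalised to a joint probability distribution."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    m += m.T                          # count each pair in both directions
    return m / m.sum()

def texture_features(img):
    """Four second-order statistics averaged over the four standard
    angles (0, 45, 90, 135 degrees), in Haralick's formulation."""
    feats = []
    for dx, dy in [(1, 0), (1, -1), (0, 1), (1, 1)]:
        p = glcm(img, dx, dy)
        i, j = np.indices(p.shape)
        energy = (p ** 2).sum()
        entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
        inertia = ((i - j) ** 2 * p).sum()        # a.k.a. contrast
        idm = (p / (1.0 + (i - j) ** 2)).sum()    # inverse difference moment
        feats.append((energy, entropy, inertia, idm))
    return np.mean(feats, axis=0)

# Toy "lesion" patch quantised to 8 grey levels
rng = np.random.default_rng(0)
patch = rng.integers(0, 8, size=(32, 32))
print(texture_features(patch))
```

In practice the mean and standard deviation of each metric over the four angles would be fed to the multinomial classifier, rather than the mean alone.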

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 270
462 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an attractive solution for harbor or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design; for instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study focuses on the definition of hydrodynamic water pressure and the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, which apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and with CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit theoretical Westergaard pressures when the latter are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, due to its instantaneous nature, the hydrodynamic pressure contributes only 5% of the total load applied on the sheet pile.
These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of its relevant factors is the selection of the seismic reduction factor. Many studies discuss its importance, but also its uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements when compared with more advanced methods. In this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analysis. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies by Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case; further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improving its methodologies and approaches. Consequently, designs could offer better seismic solutions thanks to advanced methods such as those presented in this study.
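The Westergaard formulation referenced above prescribes a square-root pressure profile on a rigid vertical face. A brief sketch, assuming incompressible water and a rigid wall (the water depth, seismic coefficient, and seawater density below are illustrative values, not taken from the study):

```python
import math

RHO_W = 1025.0   # seawater density, kg/m^3 (illustrative)
G = 9.81         # gravitational acceleration, m/s^2

def westergaard_pressure(z, H, kh):
    """Westergaard hydrodynamic pressure (Pa) at depth z below the free
    surface on a rigid vertical wall retaining water of depth H, for a
    horizontal seismic coefficient kh:
        p(z) = (7/8) * kh * rho * g * sqrt(H * z)
    """
    return 7.0 / 8.0 * kh * RHO_W * G * math.sqrt(H * z)

def westergaard_resultant(H, kh, n=10000):
    """Midpoint-rule integration of p(z) over the depth; the closed form
    of the resultant is P = (7/12) * kh * rho * g * H^2."""
    dz = H / n
    return sum(westergaard_pressure((i + 0.5) * dz, H, kh) * dz
               for i in range(n))

H, kh = 12.0, 0.15
closed_form = 7.0 / 12.0 * kh * RHO_W * G * H ** 2
print(westergaard_resultant(H, kh), closed_form)
```

Evaluating `kh` with the instantaneous acceleration at each time step, rather than a single constant value, reproduces the fluctuating pressures the study observed in the CFD results.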

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 129
461 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site; finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of the remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multi-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, on harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, owing to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, during three years of remediation, most measured sediment parameters showed little temporal variability compared to baseline, even when different sampling methodologies were used, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 225
460 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another such crossroads, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, take shape. The new era in which we now live, referred to as hypermodernity by researchers in fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress are some of the most prevalent. Humans often have no option but to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints at an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often opaque to the public eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 140
459 Profitability and Productivity Performance of the Selected Public Sector Banks in India

Authors: Sudipto Jana

Abstract:

Background and significance of the study: The banking industry acts as a catalyst for industrial and agricultural growth, and also bears on the existence and welfare of citizens. The banking system in India was characterized by unmatched growth and extensive branch expansion in the pre-liberalization era. At the time of the financial sector reforms, the Reserve Bank of India issued regulatory norms concerning capital adequacy, income recognition, asset classification and provisioning that have progressively converged with international best practices. Bank management continuously monitors the success, effectiveness, productivity and performance of the bank, since good performance, high productivity and efficiency secure the targets of bank management as well as the aims of the bank. In a comparable way, the performance of any economy depends upon the expediency and effectiveness of its financial system, and the financial system of a nation shapes its economic growth indicators. Profitability and productivity are among the most relevant parameters of any banking group. Keeping this in view, this study examines the profitability and productivity performance of selected public sector banks in India. Methodology: This study is based on secondary data obtained from the Reserve Bank of India database for the period between 2006 and 2015. The study purposively selects four commercial banks, namely State Bank of India, United Bank of India, Punjab National Bank and Allahabad Bank. In order to analyze performance in relation to profitability and productivity, productivity indicators in terms of capital adequacy ratio, burden ratio, business per employee, spread per employee and advances per employee, and profitability indicators in terms of return on assets, return on equity, return on advances and return on branch, have been considered.
In the course of the analysis, descriptive statistics, correlation statistics and multiple regression have been used. Major findings: Descriptive statistics indicate that the productivity performance of State Bank of India is more satisfactory than that of the other public sector banks in India, but the management of productivity is unsatisfactory for all the public sector banks under study. Correlation statistics point out that the profitability of the public sector banks is strongly positively related to productivity performance for all the banks under study. Multiple regression results show that as profitability increases, profit per employee increases and net non-performing assets decrease. Concluding statements: The productivity and profitability performance of United Bank of India, Allahabad Bank and Punjab National Bank is unsatisfactory due to poor management of asset quality as well as management efficiency. Government intervention is needed so that profitability and productivity performance improve in the near future.
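Several of the indicators listed above are simple ratios of balance-sheet and income items. An illustrative sketch using the conventional definitions (the figures are made up, not drawn from the RBI database):

```python
def productivity_profitability(bank):
    """Compute a few of the study's indicators from per-bank aggregates.
    Definitions are the conventional ones; all inputs are hypothetical."""
    return {
        "business_per_employee": (bank["deposits"] + bank["advances"]) / bank["employees"],
        "advances_per_employee": bank["advances"] / bank["employees"],
        "return_on_assets": bank["net_profit"] / bank["total_assets"],
        "return_on_equity": bank["net_profit"] / bank["equity"],
        "return_on_advances": bank["net_profit"] / bank["advances"],
    }

# Hypothetical bank aggregates (e.g. in billions of rupees)
sample = {"deposits": 900.0, "advances": 700.0, "employees": 50,
          "net_profit": 12.0, "total_assets": 1600.0, "equity": 100.0}
for name, value in productivity_profitability(sample).items():
    print(name, round(value, 4))
```

Computed over a panel of bank-years, such ratios form the inputs to the descriptive, correlation, and regression analyses described above.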

Keywords: India, productivity, profitability, public sector banks

Procedia PDF Downloads 409
458 Qualitative Analysis of Occupant’s Satisfaction in Green Buildings

Authors: S. Srinivas Rao, Pallavi Chitnis, Himanshu Prajapati

Abstract:

The green building movement in India commenced in 2003. Since then, more than 4,300 projects have adopted green building concepts. For the last 15 years, the green building movement has grown strong across the country and has resulted in immense tangible and intangible benefits to stakeholders. Several success stories have demonstrated the tangible benefits experienced in green buildings; however, extensive data interpretation and qualitative analysis are required to report the intangible benefits. The emphasis is now shifting to the concept of people-centric design, and the productivity, health and wellbeing of occupants are gaining importance. This research was part of the World Green Building Council’s initiative 'Better Places for People', which aims to create a world where buildings support healthier and happier lives. The overarching objective of this study was to understand the perception of users living and working in green buildings. The study was conducted in twenty-five IGBC-certified green buildings across India, and a comprehensive questionnaire was designed to capture occupants’ perception and experience of the built environment. The research focussed on eight attributes of healthy buildings: thermal comfort, visual comfort, acoustic comfort, ergonomics, greenery, fitness, green transit, and sanitation and hygiene. Occupants’ perception and experience were analysed to understand their satisfaction level. The macro-level findings of the study indicate that green buildings have addressed the attributes of healthy buildings to a large extent, and that occupants give tremendous importance to attributes such as visual comfort, daylight, fitness and greenery.
89% of occupants were comfortable with the visual environment, on account of the various lighting elements incorporated in the design. The study also highlighted the importance occupants attach to fitness-related activities: 84% of occupants had actively utilised the sports and meditation facilities provided in their facility. Further, 88% of occupants had access to ample greenery and felt connected to the natural biodiversity. This study focusses on the immense advantages gained by users occupying green buildings. It will enable the green building movement to pursue new avenues for designing and constructing healthy buildings, support the implementation of human-centric measures and, in turn, go a long way in addressing people’s welfare and wellbeing in the built environment.

Keywords: health and wellbeing, green buildings, Indian green building council, occupant’s satisfaction

Procedia PDF Downloads 169
457 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory

Authors: Belcher Fulele, W. A. A. Ddamba

Abstract:

The study of solvent systems contributes to the understanding of intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which have significant applications in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behavior of real solution mixtures. Densities of pure DFM, n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits sigmoidal behavior, while for the [DFM + n-octane] binary system a positive deviation of the excess molar volume function was observed over the entire composition range. For each [DFM + (C5-C8) n-alkane] binary mixture, the excess molar volume decreased with increasing temperature. The excess molar volume for each [DFM + (NMF or NEF or DMF or DMA)] binary system was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume follow the order: DMA > DMF > NEF > NMF.
An increase in temperature has a greater effect on component self-association than on complex formation between the component molecules in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the equilibrium towards complex formation and gives a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free-volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional and characteristic pressure terms are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar solvents (amides, as models of proteins) with non-polar solvents (alkanes) in biological systems.
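The excess molar volume discussed above follows from the measured mixture density via the standard relation. A minimal sketch (the relation is the conventional definition; the molar masses and densities below are hypothetical placeholders, not values from the paper):

```python
def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """Excess molar volume of a binary mixture (cm^3/mol):
        V_E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2
    with molar masses M in g/mol and densities in g/cm^3."""
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2

# Hypothetical equimolar point: component molar masses, component
# densities, and the measured mixture density are illustrative only.
print(round(excess_molar_volume(0.5, 96.1, 1.117, 73.1, 0.944, 1.040), 3))
```

A negative result, as here, corresponds to volume contraction on mixing, the behaviour reported above for the DFM + amide systems.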

Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model

Procedia PDF Downloads 339
456 Harvesting Value-added Products Through Anodic Electrocatalytic Upgrading of Intermediate Compounds Utilizing Biomass to Accelerate Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ concurrently, demanding additional gas-separation procedures to prevent contamination of the H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a significant value product at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more favourable aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds, instead of oxygen, at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity through electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the many biomass refinery technologies. We therefore propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Representative biomass-derived targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH@graphite-paper bifunctional electrocatalyst, including glucose, ethanol, benzyl, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to support 50 mA cm-2 is considerably lower (by ~380 mV) than the potential for the OER. All these compounds can be oxidized to yield liquid byproducts of economic benefit.
The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm⁻²). At the same time, the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favorable glucose oxidation in comparison to the OER. Overall, such a combination of HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
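The reported cell potentials imply a sizeable electrical-energy saving per unit of hydrogen. As an illustrative back-of-the-envelope check (not a calculation from the paper), the energy per kilogram of H₂ follows from E = nFV with n = 2 electrons per molecule:

```python
F = 96485.3        # Faraday constant, C/mol
N_E = 2            # electrons transferred per H2 molecule
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def energy_kwh_per_kg(cell_voltage):
    """Electrical energy per kg of H2: E = n * F * V per mole,
    scaled by moles per kilogram and converted J -> kWh."""
    j_per_mol = N_E * F * cell_voltage
    return j_per_mol / M_H2 / 3.6e6

e_water = energy_kwh_per_kg(1.82)    # conventional water splitting
e_glucose = energy_kwh_per_kg(1.44)  # coupled glucose oxidation
saving_pct = 100 * (1 - e_glucose / e_water)   # ~21% less energy per kg H2
```

At constant current, the relative saving reduces to the voltage ratio, so the 0.38 V drop translates to roughly a fifth of the electrical input per kilogram of hydrogen.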

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 80
455 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (projects, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling of resource incompatibility, updating of task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effective simulation suits the nature of the problem and the requirement to run several scenarios (30-40 simulations) before finalizing the schedules.
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job-shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as strict Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements on the state of "cyclic" resources (resources with multiple possible states, only one active at a time) to perform tasks, which can require unique combinations of several cyclic resources. Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It schedules 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.
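The greedy heuristic with a dynamic cost function and per-time-step situational assessment can be sketched in miniature. The code below is an illustrative toy (one shared resource pool, a hypothetical urgency function), not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: int
    demand: int                        # units of one shared resource
    preds: list = field(default_factory=list)
    priority: float = 1.0
    start: int = -1                    # -1 means "not yet scheduled"

def greedy_schedule(tasks, capacity):
    """Time-stepped greedy heuristic: at each step, retire finished
    tasks, then start the highest-priority eligible tasks that fit
    the remaining resource capacity. Returns the makespan."""
    done, running, t = set(), [], 0
    while len(done) < len(tasks):
        # Retire tasks whose end time has been reached
        for tk, end in running:
            if end <= t:
                done.add(tk.name)
        running = [(tk, end) for tk, end in running if end > t]
        used = sum(tk.demand for tk, _ in running)
        # Eligible: unscheduled tasks whose predecessors are all done
        eligible = [tk for tk in tasks
                    if tk.start < 0 and all(p in done for p in tk.preds)]
        # Dynamic cost: a hypothetical urgency rising with elapsed time
        eligible.sort(key=lambda tk: -(tk.priority + 0.01 * t))
        for tk in eligible:
            if used + tk.demand <= capacity:
                tk.start = t
                running.append((tk, t + tk.duration))
                used += tk.demand
        t += 1
    return max(tk.start + tk.duration for tk in tasks)
```

Modularity in this shape comes from the cost function: swapping in a different urgency term (or one informed by a coupled routing or supply-chain model) changes the schedule without touching the time-stepping core.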

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 175
454 An Observation Approach to Reading Order for Single-Column and Two-Column Layout Templates

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably recover the reading order in portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting the reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine (revision 20601) to improve reading-order accuracy. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts, owing to issues with tables, formulas, multiple small separated bounding boxes, and footers. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, an observation method (here called the MESH method) is proposed, opening a new direction in reading-order research. Two important parameters are introduced: NRight, the number of bounding boxes to the right of the present bounding box, and NUnder, the number of bounding boxes under the present bounding box. The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration.
Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, the preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-separated-bounding-box issue can be solved by using the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and randomly scattered small bounding boxes. The detection of table positions and the recognition of table structure are outside the scope of this paper and require separate research. Future work will address how to detect the table position on the page and extract the content of the table.
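The two MESH parameters can be computed directly from paragraph bounding boxes. The sketch below is one plausible reading of the definitions; the vertical/horizontal overlap conditions are our assumption, not taken from the paper:

```python
def mesh_parameters(boxes, page_w, page_h):
    """Compute, for each paragraph bounding box (x, y, w, h) with the
    origin at the top-left corner, MESH-style parameters:
    n_right: boxes entirely to the right that overlap vertically,
    n_under: boxes entirely below that overlap horizontally,
    plus the normalized x and y positions."""
    feats = []
    for i, (x, y, w, h) in enumerate(boxes):
        n_right = sum(
            1 for j, (x2, y2, w2, h2) in enumerate(boxes)
            if j != i and x2 >= x + w and y2 < y + h and y2 + h2 > y)
        n_under = sum(
            1 for j, (x2, y2, w2, h2) in enumerate(boxes)
            if j != i and y2 >= y + h and x2 < x + w and x2 + w2 > x)
        feats.append({"n_right": n_right, "n_under": n_under,
                      "xn": x / page_w, "yn": y / page_h})
    return feats
```

For a two-column page, a left-column paragraph gets n_right ≥ 1 while a right-column paragraph gets n_right = 0, which is the kind of signal an ordering rule can exploit.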

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 161
453 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar-Assisted Heat Pump for Space Heating and Domestic Hot Water Production

Authors: Mohammad Emamjome Kashan, Alan S. Fung

Abstract:

In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be considered for diurnal solar thermal energy storage. Due to the mismatch between the building heating load and solar irradiation availability, relatively large storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy substantial space, which is relatively expensive, especially in big cities. This project investigates the possibility of using a specific building construction material (ICF, Insulated Concrete Form) as diurnal solar thermal energy storage integrated with a heat pump and a microchannel solar thermal (MCST) collector. Little literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution to the solar thermal mismatch. By using ICF walls integrated into the building envelope instead of large storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the thermal storage benefits of ICF walls against a system without them. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle).
Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process), increasing the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are connected to the tank to deliver the building heating load. DHW is also provided from the main tank. The system with ICF walls, with an average solar fraction of 82-88%, can cover the entire heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis of the different parameters influencing the solar fraction is discussed in detail.
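The storage capacity of a concrete ICF core can be estimated from a simple sensible-heat balance, Q = m·cp·ΔT. The figures below (wall area, core thickness, concrete properties, temperature swing) are illustrative assumptions, not values from the study:

```python
def wall_storage_kwh(core_volume_m3, delta_t_k,
                     density=2300.0,   # plain concrete, kg/m^3 (assumed)
                     cp=880.0):        # specific heat, J/(kg*K) (assumed)
    """Sensible heat stored in a concrete core: Q = m * cp * dT,
    converted from joules to kWh."""
    mass = density * core_volume_m3
    return mass * cp * delta_t_k / 3.6e6

# e.g. 40 m^2 of wall with a 0.15 m concrete core charged by a 10 K swing
q = wall_storage_kwh(40 * 0.15, 10.0)   # roughly 34 kWh
```

Even a modest temperature swing over a few walls stores tens of kWh, which is why the concrete core can stand in for a sizeable water tank.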

Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector

Procedia PDF Downloads 108
452 Development of Polylactic Acid Insert with a Cinnamaldehyde-Betacyclodextrin Complex for Cape Gooseberry (Physalis Peruviana L.) Packed

Authors: Gómez S. Jennifer, Méndez V. Camila, Moncayo M. Diana, Vega M. Lizeth

Abstract:

The cape gooseberry is a climacteric fruit, and Colombia is one of its principal exporters in the world. Environmental conditions of temperature and relative humidity decrease the titratable acidity and pH. These conditions and fruit maturation result in fungal proliferation of the Botrytis cinerea disease. Plastic packaging for fresh cape gooseberries is used for protection against mechanical damage but creates an atmosphere suitable for fungal growth. Beta-cyclodextrins are currently implemented as coatings for the encapsulation of hydrophobic compounds, for example, bioactive compounds from essential oils such as cinnamaldehyde, which has a high antimicrobial capacity but is a volatile substance. In this article, the casting method was used to obtain a polylactic acid (PLA) polymer film containing the beta-cyclodextrin-cinnamaldehyde inclusion complex, generating an insert that allowed the controlled release of the antifungal substance in packed cape gooseberries to decrease contamination by latent Botrytis cinerea during storage. For the encapsulation technique, three ratios for the cinnamaldehyde:beta-cyclodextrin inclusion complex were proposed: (25:75), (40:60), and (50:50). Spectrophotometry, colorimetry in the L*a*b* coordinate space, and scanning electron microscopy (SEM) were performed for the complex characterization. Subsequently, two ratios of Tween to water, (40:60) and (50:50), were used to obtain the polylactic acid (PLA) film.
To determine the mechanical and physical parameters, colourimetry in the L*a*b* coordinate space, atomic force microscopy, and stereoscopy were performed to determine the transparency and flexibility of the film. In both cases, Statgraphics software was used to determine the best ratio in each of the proposed phases: for encapsulation it was (50:50), with an encapsulation efficiency of 65.92%, and for casting the (40:60) ratio gave greater transparency and flexibility, which permitted its incorporation into the polymeric packaging. A release assay was also conducted under ambient temperature conditions to evaluate the concentration of cinnamaldehyde inside the packaging through gas chromatography for three weeks. It was found that the insert provided a controlled release. Nevertheless, a higher cinnamaldehyde concentration is needed to reach the minimum inhibitory concentration for the fungus Botrytis cinerea (0.2 g/L). The homogeneity of the cinnamaldehyde gas phase inside the packaging can be improved by considering other insert configurations. This development aims to advance emerging food preservation technologies with the controlled release of antifungals, reducing the deterioration of the physico-chemical and sensory properties of the fruit caused by contamination with microorganisms in the postharvest stage.

Keywords: antifungal, casting, encapsulation, postharvest

Procedia PDF Downloads 59
451 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH heats the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density, including collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for EBW heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization.
The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked against experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modeling is compared with the data from the experiments.
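The cut-off density discussed above follows directly from the plasma-frequency condition. As a hedged sketch (the 2.45 GHz source frequency is our assumption, chosen as a common magnetron frequency, not stated in the abstract), the resonant magnetic field and the O-mode cut-off density can be computed as:

```python
import math

E0 = 8.8541878128e-12   # vacuum permittivity, F/m
ME = 9.1093837015e-31   # electron mass, kg
QE = 1.602176634e-19    # elementary charge, C

def ecr_field(f_hz):
    """Magnetic field at which the electron cyclotron frequency
    matches the heating frequency: omega_ce = e * B / m_e."""
    return 2 * math.pi * f_hz * ME / QE

def cutoff_density(f_hz):
    """O-mode cut-off density from omega_pe = omega, i.e.
    n_c = eps0 * m_e * omega^2 / e^2."""
    w = 2 * math.pi * f_hz
    return E0 * ME * w**2 / QE**2

B = ecr_field(2.45e9)         # ~0.0875 T for a 2.45 GHz source
n_c = cutoff_density(2.45e9)  # ~7.4e16 m^-3
```

Densities above n_c are exactly the regime where the UHR coupling and EBW conversion channels listed in the abstract take over from direct wave propagation.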

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 76
450 Outcome of Naive SGLT2 Inhibitors Among ICU-Admitted Acute Stroke Patients with T2DM: A Prospective Cohort Study in NC Multispecialty Hospital, Biratnagar, Nepal

Authors: Birendra Kumar Bista, Rhitik Bista, Prafulla Koirala, Lokendra Mandal, Nikrsh Raj Shrestha, Vivek Kattel

Abstract:

Introduction: Poorly controlled diabetes is associated with both the occurrence and poor outcome of stroke. High blood sugar reduces cerebral blood flow and increases intracranial pressure, cerebral edema, and neuronal death, especially among patients with poorly controlled diabetes [1]. SGLT2 inhibitors are associated with a 50% reduction in hemorrhagic stroke compared with placebo. SGLT2 inhibitors decrease cardiovascular events by reducing glucose, blood pressure, weight, arteriosclerosis, albuminuria, and atrial fibrillation [2,3]. No study has been conducted in low-income countries to examine the role of post-stroke SGLT2 inhibitors in diabetic patients at and after ICU admission. Aims: The aim of the study was to measure the 12-month outcome of diabetic patients with acute stroke admitted to the ICU with naïve SGLT2 inhibitor add-on therapy. Method: This was a prospective cohort study carried out in a 250-bed tertiary neurology care hospital in the province capital Biratnagar, Nepal. Diabetic patients with acute stroke admitted to the ICU from 1st January 2022 to 31st December 2022 who were not under SGLT2 inhibitors were included in the study. These patients were managed as per hospital protocol. Empagliflozin was added for alternately enrolled patients and was continued at the time of discharge and during follow-up unless contraindicated. These patients were followed up for 12 months. The outcomes measured were mortality, morbidity requiring readmission or hospital visits other than regular follow-up, SGLT2 inhibitor-related adverse events, neuropsychiatric comorbidity, functional status, and biochemical parameters. Ethical permission was obtained from the hospital administration and ethical board. Results: Among 147 diabetic cases, 68 were not treated with empagliflozin, whereas 67 cases were started on the SGLT2 inhibitor. The HbA1c level and one-year mortality were significantly lower among patients in the empagliflozin arm.
Over a period of 12 months, 427 acute stroke patients were admitted to the ICU. Of these, 44% were female, 61% hypertensive, 34% diabetic, 57% had dyslipidemia, and 26% were smokers, with a median age of 45 years. Among the 427 cases, 4% required neurosurgical interventions and 76% had hemorrhagic CVA. The most common reason for ICU admission was GCS < 8 (51%). The median ICU stay was 5 days. ICU mortality was 21%, whereas 1-year mortality was 41%, with the most common cause being pneumonia. Empagliflozin-related adverse effects were seen in 11%, most commonly lower urinary tract infection (6%). Conclusion: Empagliflozin can safely be started in acute stroke patients, with better HbA1c control and lower mortality compared to treatment without an SGLT2 inhibitor.

Keywords: diabetes, ICU, mortality, SGLT2 inhibitors, stroke

Procedia PDF Downloads 44
449 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive

Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu

Abstract:

Diamond-reinforced Ni matrix composites have been widely applied in engineering for coating large-area structural parts owing to their high hardness and good wear and corrosion resistance compared with pure nickel. The mechanical properties of a Ni-diamond composite coating can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings and can effectively increase coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed, combined with energy-dispersive X-ray spectrometry, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained by a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested by an automatic microhardness tester. The friction coefficient and wear volume were measured by a reciprocating wear tester to characterize the coating's wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings.
By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine. The hardness of the composite coatings increased with the glycine concentration. The friction and wear properties improved as the glycine concentration was optimized, showing a decrease in wear volume. The wear resistance of the composite coatings increased as the glycine content was raised to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to nickel grain refinement and improved diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and increasing the wear resistance of the composite coatings. Therefore, glycine can be used as an additive during the co-deposition process to improve the mechanical properties of protective coatings.

Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings

Procedia PDF Downloads 111
448 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people's health and the environment and is an issue of great concern in Beijing, brought to the attention of the government by the media. Both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may arise, to better inform air-quality control policy. In these data, 66,650 hours and 2,687 days provided valid readings. Lognormal, gamma, and Weibull distributions were fit to the data through parameter estimation, and the chi-squared test was employed to compare the actual data with the fitted distributions. The data were also used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, as well as over specific periods that received large amounts of media attention, to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing's air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. No single distribution fit the entire dataset of 2,687 days well, but each of the three distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution fitting recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period across all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher mean but a comparable median in these months. Additionally, the evening hours, especially in winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the daytime prohibition of trucks in the city and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unusually good or bad air quality, the government's temporary pollution-control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but still needs improvement. The analysis will be updated when new data become available.
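The distribution-fitting step (fit a lognormal by maximum likelihood on log-transformed data, then compare binned counts with a Pearson chi-squared statistic) can be sketched as follows. The sample here is synthetic, standing in for the hourly embassy readings:

```python
import math, random, statistics

def fit_lognormal(data):
    """MLE lognormal fit: a lognormal sample is normal after a log
    transform, so fit the mean and standard deviation of log(x)."""
    logs = [math.log(x) for x in data]
    return statistics.fmean(logs), statistics.pstdev(logs)

def lognormal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def chi_squared(data, mu, sigma, edges):
    """Pearson chi-squared: compare observed bin counts against the
    counts expected under the fitted lognormal."""
    n, stat = len(data), 0.0
    for lo, hi in zip(edges, edges[1:]):
        obs = sum(lo <= x < hi for x in data)
        exp = n * (lognormal_cdf(hi, mu, sigma) - lognormal_cdf(lo, mu, sigma))
        if exp > 0:
            stat += (obs - exp) ** 2 / exp
    return stat

# Synthetic stand-in for the hourly PM2.5 series (median ~74 ug/m^3)
random.seed(0)
sample = [math.exp(random.gauss(4.3, 0.7)) for _ in range(1000)]
mu, sigma = fit_lognormal(sample)
stat = chi_squared(sample, mu, sigma, [1.0, 25.0, 50.0, 100.0, 200.0, 1e5])
```

The same binning can be reused for gamma and Weibull fits, with the lowest chi-squared statistic indicating the best-fitting family for a given year.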

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 228
447 Stability Analysis of Hossack Suspension Systems in High Performance Motorcycles

Authors: Ciro Moreno-Ramirez, Maria Tomas-Rodriguez, Simos A. Evangelou

Abstract:

A motorcycle's front end links the front wheel to the chassis and has two main functions: front wheel suspension and vehicle steering. To date, several suspension systems have been developed in order to achieve the best possible front-end behaviour, the telescopic fork being the most common one and already subjected to several years of study in terms of its kinematics, dynamics, stability, and control. A motorcycle telescopic fork suspension consists of a pair of outer tubes that contain the suspension components (coil springs and dampers) and two inner tubes that slide into the outer ones, allowing the suspension travel. The outer tubes are attached to the frame through two triple trees, which connect the front end to the main frame through the steering bearings and allow the front wheel to turn about the steering axis. This system keeps the front wheel's displacement in a straight line parallel to the steering axis. However, there exist alternative suspension designs that allow different trajectories of the front wheel over the suspension travel. In this contribution, the authors investigate an alternative front suspension system (the Hossack suspension) and its influence on the motorcycle's nonlinear dynamics, to identify and reduce the stability risks that a new suspension system may introduce. Based on an existing high-fidelity motorcycle mathematical model, the front-end geometry is modified to accommodate a Hossack suspension system. It is characterized by a double wishbone structure, directly attached to the chassis, that varies the front-end geometry during certain maneuvers and, consequently, the machine's behaviour and response. The kinematics of this system and its impact on the motorcycle's performance and stability are analyzed and compared to the well-known telescopic fork suspension system.
The framework of this research is mathematical modelling and numerical simulation. Full stability analyses are performed in order to understand how the motorcycle dynamics may be affected by the newly introduced front-end design. The study is carried out by a combination of nonlinear dynamical simulation and root-loci methods. A modal analysis is performed in order to gain a deeper understanding of the different modes of oscillation and how the Hossack suspension system affects them. The results show that different kinematic designs of a double wishbone suspension system do not modify the motorcycle's general stability. The normal-mode properties remain unaffected by the new geometrical configurations, although these normal modes differ from one suspension system to the other. It is seen that the normal-mode behaviour depends on various important dynamic parameters, such as the front frame flexibility, the steering damping coefficient, and the centre of mass location.
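The root-loci idea, sweeping a design parameter and tracking the real parts of the eigenvalues, can be illustrated on a minimal single-mode oscillator. This toy model (hypothetical stiffness and damper values) is far simpler than the high-fidelity motorcycle model used in the paper:

```python
import cmath

def modes(k, c, m=1.0):
    """Eigenvalues of the state matrix [[0, 1], [-k/m, -c/m]] of a
    single damped mode m*x'' + c*x' + k*x = 0. The real part is the
    mode's damping rate, the imaginary part its frequency."""
    a, b = -c / m, -k / m
    disc = cmath.sqrt(a * a + 4 * b)
    return (a + disc) / 2, (a - disc) / 2

def stable(eigs, tol=1e-12):
    """A mode is stable when all eigenvalues sit in the left half-plane."""
    return all(e.real < tol for e in eigs)

# Root-locus style sweep over a hypothetical steering-damper value c
locus = {c: modes(k=30.0, c=c) for c in (0.0, 0.5, 2.0)}
```

For the full motorcycle model the state matrix has many such modes (wobble, weave, etc.), but the stability criterion is the same: every eigenvalue's real part must stay negative across the operating range.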

Keywords: nonlinear mechanical systems, motorcycle dynamics, suspension systems, stability

Procedia PDF Downloads 209
446 Genetic Diversity of Cord Blood of the National Center of Blood Transfusion, Mexico (NCBT)

Authors: J. Manuel Bello-López, Julieta Rojo-Medina

Abstract:

Introduction: Transplants of Umbilical Cord Blood Units (UCBU) are a therapeutic possibility for patients with oncohaematological disorders, especially children. In Mexico, 48.5% of oncological diseases in children aged 1-4 years are leukemias, whereas in patients aged 5-14 and 15-24 years, lymphomas and leukemias represent the second and third causes of death in these groups, respectively. It is therefore necessary to have more UCBU registries in order to ensure genetic diversity in the country, because the search for an appropriate UCBU is increasingly difficult for patients of mixed ethnicity. Objective: To estimate the genetic diversity (polymorphisms) of Human Leucocyte Antigen (HLA) Class I (A, B) and Class II (DRB1) in UCBU cryopreserved for transplant at the Cord Blood Bank of the NCBT. Material and Methods: HLA typing of 533 UCBU for transplant was performed from 2003-2012 at the Histocompatibility Laboratory of the Research Department (evaluated by the Los Angeles, Ca. Immunogenetics Center) of the NCBT. Class I HLA-A and HLA-B and Class II HLA-DRB1 typing was performed using medium-resolution Sequence-Specific Primers (SSP). In cases of ambiguity detected by SSP, the Sequence-Specific Oligonucleotide (SSO) method was carried out. A strict analysis of population genetic parameters was done for 5 representative UCBU populations. Results: 46.5% of UCBU were collected from Mexico City, followed by the State of Mexico (30.95%), Puebla (8.06%), Morelos (6.37%), and Veracruz (3.37%); the remaining UCBU (4.75%) came from other states. The identified genotypes correspond to Amerindian (HLA-A*02, 31; HLA-B*39, 15, 48), Caucasian (HLA-A*02, 68, 01, 30, 31; HLA-B*35, 15, 40, 44, 07 and HLA-DRB1*04, 08, 07, 15, 03, 14), Oriental (HLA-A*02, 30, 01, 31; HLA-B*35, 39, 15, 40, 44, 07, 48 and HLA-DRB1*04, 07, 15, 03) and African (HLA-A*30 and HLA-DRB1*03) origins.
The genetic distances obtained by Cavalli-Sforza analysis of the five states showed significant genetic differences when comparing allele frequencies. The shortest genetic distance is between Mexico City and the state of Puebla (0.0039) and the largest between Veracruz and Morelos (0.0084). In order to identify significant differences between these states, an ANOVA test was performed, demonstrating that UCBU differ significantly according to their origin (P < 0.05). This is shown by the divergence between arms in the Neighbor-Joining dendrogram. Conclusions: The NCBT provides UCBU to patients with oncohaematological disorders across the country. There is a group of patients for whom no compatible UCBU can be found due to their mixed ethnic origin; for example, the population of northern Mexico is mostly Caucasian. Most of the NCBT donors are of various ethnic origins, predominantly Amerindian and Caucasian, while some ethnic minorities such as Oriental, African, and pure indigenous ethnicities are not represented. The NCBT is, therefore, establishing agreements with different states of Mexico to promote the altruistic donation of umbilical cord blood in order to enrich the genetic diversity in its files.
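The Cavalli-Sforza and Edwards chord distance underlying the inter-state comparison can be sketched for a single locus; the allele frequencies below are hypothetical illustrations, not the study's HLA data:

```python
import math

def chord_distance(p, q):
    """Cavalli-Sforza & Edwards chord distance for one locus, given
    allele-frequency vectors p and q (each summing to 1):
    D = (2/pi) * sqrt(2 * (1 - cos(theta))),
    where cos(theta) = sum_i sqrt(p_i * q_i)."""
    cos_theta = sum(math.sqrt(a * b) for a, b in zip(p, q))
    cos_theta = min(1.0, cos_theta)   # guard against rounding error
    return (2 / math.pi) * math.sqrt(2 * (1 - cos_theta))

# Hypothetical allele frequencies at one locus for two populations
pop1 = [0.40, 0.35, 0.25]
pop2 = [0.38, 0.36, 0.26]
d = chord_distance(pop1, pop2)
```

Identical frequency vectors give a distance of 0 and fully disjoint alleles give the maximum of (2/π)·√2; multi-locus distances average this quantity over loci before feeding a matrix of pairwise values to Neighbor-Joining.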

Keywords: cord blood, genetic diversity, human leucocyte antigen, transplant

Procedia PDF Downloads 369