Search results for: breathing rate (BR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8201

461 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, the development of new in situ characterization techniques that can be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and to correlate it with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ Transmission Electron Microscopy (TEM) is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed homemade transmission cell. For real in situ TEM we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH). The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas the Cl uptake correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase of the Cl uptake was found. Considering all of these observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism of the catalysed reaction is proposed. The chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step of the catalysed reaction. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.
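
As an illustration of how the coverage-reactivity trends summarized above could be quantified, the hedged sketch below computes rank correlations between surface coverages and reaction rates across steady states; the arrays are invented placeholders, not data from this work.

```python
# Hypothetical illustration: quantifying how surface coverages track reactivity
# across steady states. The numbers below are placeholders, not measured data.
import numpy as np
from scipy.stats import spearmanr

oh_density = np.array([0.8, 1.1, 1.5, 1.9, 2.4])   # a.u., one value per steady state
cl_uptake  = np.array([3.1, 2.8, 2.3, 1.9, 1.5])   # a.u., one value per steady state
rate       = np.array([0.9, 1.3, 1.8, 2.2, 2.9])   # HCl conversion rate, a.u.

rho_oh, p_oh = spearmanr(oh_density, rate)         # expected positive correlation
rho_cl, p_cl = spearmanr(cl_uptake, rate)          # expected negative correlation
print(f"OH vs rate: rho={rho_oh:+.2f} (p={p_oh:.3f})")
print(f"Cl vs rate: rho={rho_cl:+.2f} (p={p_cl:.3f})")
```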

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 292
460 The Incidence of Inferior Alveolar Nerve Dysfunction Following Bilateral Sagittal Split Osteotomies: A Single Centre Retrospective Audit in the United Kingdom

Authors: Krupali Mukeshkumar, Jinesh Shah

Abstract:

Background: Bilateral Sagittal Split Osteotomy (BSSO), used for the correction of mandibular deformities, is a common oral and maxillofacial surgical procedure. Inferior alveolar nerve dysfunction is commonly reported post-operatively by patients as paresthesia or anesthesia. The current literature lacks a consensus on the incidence of inferior alveolar nerve dysfunction, as patients are not routinely assessed pre- and post-operatively with an objective assessment. The reported incidence varies from 9% to 85% of patients, with some authors arguing that 100% of patients experience nerve dysfunction immediately post-surgery. Systematic reviews have shown a difference between incidence rates at different follow-up periods using objective and subjective methods. Aim: To identify the incidence of inferior alveolar nerve dysfunction following BSSO. Gold standard: Nerve dysfunction incidence rates similar to or lower than the current literature figures of 83% at day one post-operatively and 18.4% at one-year follow-up. Setting: A retrospective cross-sectional audit of patients treated between 2017 and 2019 at the Royal Stoke University Hospital, Maxillofacial and Orthodontic departments. Sample: All patients who underwent a BSSO (with or without a Le Fort I osteotomy) between 2017 and 2019 were identified from the database. Patients with pre-existing neurosensory disturbance, those who had a genioplasty at the same time and those with no follow-up were excluded. The sample consisted of 121 patients, 37 males and 84 females, between the ages of 17 and 50 years at the time of surgery. Methods: Clinical records of the 121 cases were reviewed to assess the age, sex, type of mandibular osteotomy, status of the nerve during the surgical procedure, type of bony split and incidence of nerve dysfunction at follow-up appointments. The surgical procedures were carried out by three maxillofacial surgeons, and follow-up appointments were carried out in the Orthodontic and Oral and Maxillofacial departments. Results: 120 patients were treated to correct a mandibular facial deformity and one patient was treated for sleep apnoea. Seventeen patients had a mandibular setback and 104 patients had a mandibular advancement. Sixty-eight patients reported inferior alveolar nerve dysfunction at one week following their surgery. Seventy-six patients had temporary paresthesia present between 2 weeks and 12 months post-surgery. Thirteen patients had persistent nerve dysfunction at 12 months, of whom one had a bad bony split during the BSSO. The incidence of nerve dysfunction post-operatively was 6.6% after 1 day, 56.1% at 1 week, 62.8% at 2 weeks, 59.5% between 3-6 weeks, 43.0% between 8-16 weeks and 10.7% at 1 year. Conclusions: The results of this audit show a similar incidence rate to the research gold standard at the one-year follow-up. Future Recommendations: No changes to surgical procedure or technique are indicated, but improved documentation and a standardized approach to the assessment of post-operative nerve dysfunction would be beneficial.
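
A hedged sketch of how such time-point incidence figures can be tabulated from follow-up records; the table, column names and values are invented for illustration and are not the audit's data.

```python
# Hypothetical illustration of tabulating post-operative incidence per follow-up window.
import pandas as pd

followups = pd.DataFrame({
    "patient_id":  [1, 1, 2, 2, 3, 3],
    "window":      ["1 week", "1 year", "1 week", "1 year", "1 week", "1 year"],
    "dysfunction": [True, False, True, True, False, False],
})

# Share of assessed patients reporting dysfunction in each follow-up window, in %.
incidence = (followups.groupby("window")["dysfunction"]
             .mean()
             .mul(100)
             .round(1))
print(incidence)
```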

Keywords: bilateral sagittal split osteotomy, inferior alveolar nerve, mandible, nerve dysfunction

Procedia PDF Downloads 240
459 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population. Approximately 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and allow optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules in ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant) reported by a radiologist and proven by biopsy. Two slices per patient were loaded in MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1: default or original gray levels; N2: +/- 3 sigma, i.e. dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme using the well-known statistical methods employed in the MaZda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis. Therefore, the features were reduced, based on the maximum Fisher coefficient and on the minimum probability of classification error and average correlation coefficient (POE+ACC), to the 10 best and most effective features per normalization scheme. We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and Receiver Operating Characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and feature reduction methods on the discrimination power of the obtained features and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
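
A minimal, hedged sketch of the kind of pipeline described (normalization, reduction to 10 features, projection, 1-NN classification, ROC analysis), built on synthetic feature vectors. SelectKBest with an F-test stands in for the Fisher/POE+ACC ranking and PCA stands in for the PCA/LDA/NDA step, so this does not reproduce the authors' exact MaZda workflow.

```python
# Synthetic stand-in for 270 texture features per ROI, two ROIs per patient.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 270))          # 2 ROIs x 70 patients, 270 features each
y = rng.integers(0, 2, size=140)         # 0 = benign, 1 = malignant (synthetic labels)

pipe = make_pipeline(
    StandardScaler(),                    # "standard (S)" state
    SelectKBest(f_classif, k=10),        # keep the 10 most discriminative features
    PCA(n_components=5),                 # projection step (LDA/NDA could be swapped in)
    KNeighborsClassifier(n_neighbors=1), # 1-NN classifier
)

pred = cross_val_predict(pipe, X, y, cv=5)
print("Az (AUC):", roc_auc_score(y, pred))
print(confusion_matrix(y, pred))
```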

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 281
458 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they invested in, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite the considerable criticism they attract. The main drawback of such indicators is the fact that they are calculated based on accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) its value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system and poor quality of production. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research – an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria. These include company-specific characteristics, such as size and ownership structure. The evaluation also included a comparison of the managers' opinions and the performance of their enterprises – measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard for customers' requirements.
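
A hedged sketch of the kind of non-parametric comparison mentioned above, testing whether return on equity differs between enterprises whose managers did and did not report a given negative factor; the abstract does not name the specific test, so the Mann-Whitney U test and the figures below are assumptions, not AMADEUS data.

```python
# Hypothetical ROE values for two groups of enterprises (placeholder numbers).
import numpy as np
from scipy.stats import mannwhitneyu

roe_factor_reported     = np.array([0.04, 0.02, 0.07, -0.01, 0.03, 0.05])
roe_factor_not_reported = np.array([0.09, 0.12, 0.06, 0.10, 0.08, 0.11])

# Two-sided rank-based test of whether the ROE distributions differ.
stat, p_value = mannwhitneyu(roe_factor_reported, roe_factor_not_reported,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```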

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 224
457 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures

Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai

Abstract:

In recent times, South Africa has experienced extensive droughts that have created the need for reliable small water reservoirs. These reservoirs have quick fabrication and installation times compared to market alternatives. An elevated water tank has inherent potential energy, so no additional water pumps are required to sustain water pressure at the outlet point – thus ensuring that a water source is available without electricity. The initial construction formwork and the complex geometric shape of concrete towers that require casting can become time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally rural areas, schools or farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, combined with supporting lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions arising from wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank and the current wind loading codes applicable to South Africa. The research compares the respective design parameters (both theoretical and code-based), whereby FEA analyses are conducted on the various design solutions. The currently available wind loading codes are not sufficient to design slender cantilevered latticed steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes depend on various assumptions. Factors that require investigation in the study are: the wind loading angle to the face of the structure that results in the maximum load; the internal structural effects in models with different bracing patterns; the influence of the aspect ratio of the tank on the loading; and the influence of the clearance height of the tank on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.
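
For orientation, most modern wind loading codes (including the South African standard, which follows the Eurocode format) reduce the problem to a quasi-static peak pressure and a force coefficient; the generic form is sketched below in standard textbook notation, which is an assumption rather than the paper's own formulation.

\[
q_p(z) = \tfrac{1}{2}\,\rho\,v_p^2(z), \qquad F_w = c_s c_d \; c_f \; q_p(z_e)\, A_{ref}
\]

where ρ is the air density, v_p(z) the peak wind speed at height z, c_s c_d the structural factor, c_f the force coefficient (for a lattice tower, a function of the solidity ratio and the wind incidence angle), and A_ref the projected reference area of the member or panel.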

Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads

Procedia PDF Downloads 151
456 The Charge Exchange and Mixture Formation Model in the ASz-62IR Radial Aircraft Engine

Authors: Pawel Magryta, Tytus Tulwin, Paweł Karpiński

Abstract:

The ASz-62IR engine is a radial aircraft engine with 9 cylinders. It is produced by the Polish company WSK "PZL-KALISZ" S.A. and is currently being developed by that company together with the Lublin University of Technology. In order to support the effective technological development of this unit, it was decided to build a simulation model. The model of the ASz-62IR was developed with the AVL BOOST software, a tool dedicated to the one-dimensional modeling of internal combustion engines. This model can be used to calculate the parameters of the air and fuel flow in the intake system, including charging devices, as well as the combustion and the exhaust flow to the environment. The main purpose of the model is the analysis of the charge exchange and mixture formation in this engine. For this purpose, the model consists of elements such as: the air inlet, the throttle system, the compressor connector, the charging compressor, the inlet pipes and injectors, the outlet pipes, the fuel injection, and a model of fuel mixing and evaporation. The model of charge exchange and mixture formation was based on the model of mass flow rate in the intake and exhaust pipes, and also on the calculation of gas property values such as the gas constant or thermal capacity. The model is based on the equations describing isentropic flow. The energy equation describing flow under steady conditions was transformed into the mass flow equation. In the model, the flow coefficient μσ is used; it varies with the stroke/valve opening and was determined under steady flow conditions. The geometry of the inlet channels and other key components was mapped with reference to the technical documentation of the engine and empirical measurements of the structural elements. The volume of the elements on the charge flow path between the air inlet and the exhaust outlet was measured by CAD mapping of the structure. The original characteristics of the engine's compressor, taken from the technical documentation, were entered into the model. Additionally, the model uses a general model for the transport of the chemical compounds of the mixture. Seven compounds are used, i.e. fuel, O2, N2, CO2, H2O, CO and H2. A gasoline fuel with a calorific value of 43.5 MJ/kg and a stoichiometric air-fuel ratio of 14.5 was used. Indirect injection into the intake manifold is used in this model. The model assumes the following simplifications: the mixture is homogeneous at the beginning of combustion; accordingly, the mixture stoichiometric coefficient A/F remains constant during combustion; and the combusted and non-combusted charges show identical pressures and temperatures, although their compositions change. As a result of the simulation studies based on the model described above, the basic parameters of the combustion process, charge exchange and mixture formation in the cylinders were obtained. The AVL BOOST software is very useful for piston engine performance simulations. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
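
For reference, the isentropic (quasi-steady) orifice relation underlying such pipe-junction and valve flow models is usually written as below; this is the standard textbook form in generic notation, not a transcription of the paper's or AVL BOOST's exact equations. The effective flow area is the geometric reference area scaled by the flow coefficient μσ.

\[
\dot m = A_{\mathrm{eff}}\; p_{01}\,\sqrt{\frac{2}{R\,T_{01}}}\;\psi, \qquad A_{\mathrm{eff}} = \mu\sigma \, A_{\mathrm{ref}}
\]
\[
\psi = \sqrt{\frac{\kappa}{\kappa-1}\left[\left(\frac{p_2}{p_{01}}\right)^{2/\kappa} - \left(\frac{p_2}{p_{01}}\right)^{(\kappa+1)/\kappa}\right]}
\]

where p01 and T01 are the upstream stagnation pressure and temperature, p2 the downstream static pressure, R the gas constant and κ the ratio of specific heats; for pressure ratios below the critical value, ψ is replaced by its choked-flow maximum.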

Keywords: aviation propulsion, AVL Boost, engine model, charge exchange, mixture formation

Procedia PDF Downloads 340
455 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage

Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan

Abstract:

The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves and energy-intensive industrial nitrogen fixation threaten their future availability. Recovering P and N from domestic sewage water offers a solution: the recovered P and N can be applied to agricultural land, replacing virgin P and N, so recovery from sewage water befits a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial to optimize nutrient flows and minimize losses. The Material Circularity Indicator (MCI) is a useful method to quantify the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, the MCI has not been used in the context of nutrient recovery. Moreover, the MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process wherein flows and stocks change with time. The rate of recycling of nutrients in nature can depend on numerous factors such as the prevailing soil conditions, the local hydrology, the presence of animals, etc. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed with the help of flow equations and transfer coefficients that incorporate the nutrient recovery step along with the agricultural application, the volatilization and leaching processes, plant uptake and subsequent animal and human uptake. The model is then used for calculating the proportions of linear and restorative flows (coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater. In this case study, located in Italy, biochar or zeolite is to be used for the recovery of P and N from domestic sewage through adsorption and thereafter as a slow-release fertilizer in agriculture. Using the model, information regarding the efficiency of nutrient recovery and application can be generated, which can help to optimize the recovery process and the application of the nutrients and, consequently, reduce the dependence of the food system on the virgin extraction of P and N.
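
A hedged sketch of a discrete-time substance flow model of the kind outlined above, tracking one nutrient through recovery, soil application, losses and return via food consumption, and reporting the restorative share; the transfer coefficients and structure are invented placeholders, not the authors' model.

```python
# Toy substance-flow loop for one nutrient (e.g., P), with assumed transfer coefficients.
steps = 120                                   # simulated periods (e.g., months)
recovery_eff, leach_frac, uptake_frac = 0.6, 0.15, 0.5   # assumed coefficients

sewage_in = 1.0       # nutrient load entering the plant per step (arbitrary units)
soil = 0.0            # stock of recovered nutrient applied to agricultural soil
restorative = 0.0     # cumulative flow that returns to the loop via food/sewage
linear = 0.0          # cumulative flow lost from the loop

for _ in range(steps):
    recovered = recovery_eff * sewage_in      # adsorbed on biochar/zeolite and applied
    linear += (1 - recovery_eff) * sewage_in  # effluent/sludge losses
    soil += recovered
    leached = leach_frac * soil               # leaching out of the loop
    taken_up = uptake_frac * soil             # crop uptake, later returned via sewage
    soil -= leached + taken_up
    linear += leached
    restorative += taken_up

print("restorative share of flows:", restorative / (restorative + linear))
```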

Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water

Procedia PDF Downloads 198
454 Health Care Teams during COVID-19: Roles, Challenges, Emotional State and Perceived Preparedness to the Next Pandemic

Authors: Miriam Schiff, Hadas Rosenne, Ran Nir-Paz, Shiri Shinan Altman

Abstract:

To examine (1) the level, predictors, and subjective perception of professional quality of life (PRoQL), posttraumatic growth, roles and task changes during the pandemic, and perceived preparedness for the next pandemic. These variables were examined as part of an international study on stress, resilience, and perceived preparedness among social workers in healthcare, in which we took part along with Australia, Canada, China, Hong Kong, Singapore, and Taiwan. (2) The extent to which background variables, rate of exposure to the virus, working in COVID wards, profession, personal resilience, and resistance to organizational change predict posttraumatic growth, perceived preparedness, and PRoQL (the latter was examined among social workers only). (3) The teams' perceptions of how the pandemic impacted them at the personal, professional, and organizational levels and what assisted them. Methodologies: Mixed quantitative and qualitative methods were used. 1,039 hospital healthcare workers from various professions participated in the quantitative study, while 32 participated in in-depth interviews. The same methods were used in six other countries. Findings: The level of PRoQL was moderate, with higher burnout and secondary traumatization levels than during routine times. Differences between countries in the level of PRoQL were found as well. Perceived preparedness for the next pandemic at the personal level was moderate and similar among the different health professions. Higher exposure to the virus was associated with lower perceived preparedness of the hospitals. Compared to other professions, doctors and nurses perceived the hospitals as significantly less prepared for the next pandemic. The preparedness of the State of Israel for the next pandemic is perceived as low by all healthcare professionals. A moderate level of posttraumatic growth was found. Staff who worked in the COVID wards reported a greater level of growth. Doctors reported the lowest level of growth. The staff's resilience was high, with no differences among professions or levels of exposure. Working in the COVID ward and resilience predicted better preparedness, while resistance to organizational change predicted worse preparedness. Findings from the qualitative part of the study revealed that healthcare workers faced challenges at the personal, professional and organizational levels during the different waves of the pandemic. They also reported internal and external resources that they either already had or acquired during that period. Conclusion: Exposure to the COVID-19 virus is associated with secondary traumatization on the one hand and personal posttraumatic growth on the other. Personal and professional discoveries and a sense of mission helped staff cope with the pandemic, which was perceived as a historic event, a war, or a mass casualty event. Personal resilience, along with the support of colleagues, family, and direct management, was seen as a significant component of coping. Hospitals should plan ahead and improve their preparedness for the next pandemic.
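
A hedged sketch of the type of prediction model described above (regressing perceived preparedness on exposure, COVID-ward work, resilience and resistance to organizational change), fitted to synthetic data; the variable names, scales and model form are assumptions, not the study's actual analysis.

```python
# Synthetic data standing in for the survey variables; no real respondent data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "exposure":   rng.integers(0, 4, n),      # 0-3 exposure rating (assumed scale)
    "covid_ward": rng.integers(0, 2, n),      # 1 = worked in a COVID ward
    "resilience": rng.normal(3.5, 0.6, n),    # mean resilience score (assumed scale)
    "resistance": rng.normal(2.8, 0.7, n),    # resistance to organizational change
})
preparedness = (2.0 + 0.3 * data["covid_ward"] + 0.4 * data["resilience"]
                - 0.3 * data["resistance"] + rng.normal(0, 0.5, n))

# Ordinary least squares with an intercept; coefficients show each predictor's effect.
model = sm.OLS(preparedness, sm.add_constant(data)).fit()
print(model.summary())
```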

Keywords: covid-19, health-care, social workers, burnout, preparedness, international perspective

Procedia PDF Downloads 74
453 Application of Typha domingensis Pers. in Artificial Floating for Sewage Treatment

Authors: Tatiane Benvenuti, Fernando Hamerski, Alexandre Giacobbo, Andrea M. Bernardes, Marco A. S. Rodrigues

Abstract:

Population growth in urban areas has caused damage to the environment, a consequence of the uncontrolled dumping of domestic and industrial wastewater. The capacity of some plants to purify domestic and agricultural wastewater has been demonstrated by several studies. Since natural wetlands have the ability to transform, retain and remove nutrients, constructed wetlands have been used for wastewater treatment. They are widely recognized as an economical, efficient and environmentally acceptable means of treating many different types of wastewater. The T. domingensis Pers. species has shown good performance and a low deployment cost in extracting, detoxifying and sequestering pollutants. Constructed Floating Wetlands (CFWs) consist of emergent vegetation established upon a buoyant structure, floating on surface waters. The upper parts of the vegetation grow and remain primarily above the water level, while the roots extend down into the water column, developing an extensive underwater root system. Thus, the vegetation grows hydroponically, performing direct nutrient uptake from the water column. Biofilm attaches to the roots and rhizomes, and as physical and biochemical processes take place, the system functions as a natural filter. The aim of this study is to assess the application of macrophytes in artificial floating systems for the treatment of domestic sewage in southern Brazil. The T. domingensis Pers. plants were placed in a flotation system (polymer structure), at full scale, in a sewage treatment plant. The sewage feed rate was 67.4 ± 8.0 m³.d⁻¹, and the hydraulic retention time was 11.5 ± 1.3 d. This CFW treats the sewage generated by 600 inhabitants, which corresponds to 12% of the population served by this municipal treatment plant. Over 12 months, samples were collected every two weeks in order to evaluate parameters such as chemical oxygen demand (COD), biochemical oxygen demand in 5 days (BOD5), total Kjeldahl nitrogen (TKN), total phosphorus, total solids, and metals. The average removal of organic matter was around 55% for both COD and BOD5. For nutrients, TKN was reduced by 45.9%, which was similar to the total phosphorus removal, while for total solids the reduction was 33%. Among the metals, aluminum, copper, and cadmium, although present in low concentrations, showed the highest percentage reductions: 82.7, 74.4 and 68.8%, respectively. Chromium, iron, and manganese removal achieved values around 40-55%. The use of T. domingensis Pers. in artificial floating systems for sewage treatment is an effective and innovative alternative for Brazilian sewage treatment systems. The evaluation of additional parameters in the treatment system may give useful information to improve the removal efficiency and increase the quality of the water bodies.
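
For reference, the removal efficiencies and retention time quoted above follow the usual definitions; the implied wetland volume below is back-calculated from the reported averages and is not a figure stated in the abstract.

\[
\eta \,[\%] = \frac{C_{in} - C_{out}}{C_{in}} \times 100, \qquad \mathrm{HRT} = \frac{V}{Q} \;\Rightarrow\; V \approx 11.5\ \mathrm{d} \times 67.4\ \mathrm{m^3\,d^{-1}} \approx 775\ \mathrm{m^3}
\]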

Keywords: constructed wetland, floating system, sewage treatment, Typha domingensis Pers.

Procedia PDF Downloads 212
452 Topographic and Thermal Analysis of Plasma Polymer Coated Hybrid Fibers for Composite Applications

Authors: Hande Yavuz, Grégory Girard, Jinbo Bai

Abstract:

Manufacturing of hybrid composites requires particular attention to overcome various critical weaknesses that originate from poor interfacial compatibility. A large number of parameters have to be considered to optimize the interfacial bond strength, either to avoid flaw sensitivity or to prevent the delamination that occurs in composites. For this reason, surface characterization of the reinforcement phase is needed in order to provide the necessary data for an assessment of fiber-matrix interfacial compatibility prior to the fabrication of composite structures. Compared to conventional plasma polymerization processes such as radiofrequency and microwave, dielectric barrier discharge assisted plasma polymerization is a promising process that can be utilized to modify the surface properties of carbon fibers in a continuous manner. Finding the most suitable conditions (e.g., plasma power, plasma duration, precursor proportion) for the plasma polymerization of pyrrole in the post-discharge region, either in the presence or in the absence of p-toluene sulfonic acid monohydrate, as well as the characterization of the plasma polypyrrole-coated fibers, are the important aspects of this work. Throughout the current investigation, atomic force microscopy (AFM) and thermogravimetric analysis (TGA) are used to characterize the plasma-treated hybrid fibers (CNT-grafted Toray T700-12K carbon fibers, referred to as T700/CNT). The TGA results show the trend in the decomposition of the deposited polymer on the fibers as a function of temperature up to 900 °C. All plasma pyrrole-treated samples began to lose weight at a relatively fast rate up to 400 °C, which suggests the loss of polymeric structures. The weight loss between 300 and 600 °C is attributed to the evolution of CO2 due to the decomposition of functional groups (e.g. carboxyl compounds). Keeping the surface chemical structure in mind, the higher the amount of carbonyl, alcohol, and ether compounds, the lower the stability of the deposited polymer. Thus, the highest weight loss is observed in the 1400 W 45 s pyrrole+pTSA.H2O plasma-treated sample, probably because its deposited polymer is less stable than that of the other plasma-treated samples. Comparison of the AFM images for the untreated and plasma-treated samples shows that the surface topography may change on a microscopic scale. The AFM image of the 1800 W 45 s treated T700/CNT fiber shows the most significant increase in roughness compared to the untreated T700/CNT fiber; namely, the fiber surface roughness increased to ~3.6 times that of the untreated T700/CNT fiber. The increase observed in surface roughness compared to the untreated T700/CNT fiber may provide more contact points between fiber and matrix due to the increased surface area, which is believed to be beneficial for their application as reinforcement in composites.

Keywords: hybrid fibers, surface characterization, surface roughness, thermal stability

Procedia PDF Downloads 234
451 One-Stage Conversion of Adjustable Gastric Band to One-Anastomosis Gastric Bypass Versus Sleeve Gastrectomy: A Single-Center Experience with a Short and Mid-Term Follow-Up

Authors: Basma Hussein Abdelaziz Hassan, Kareem Kamel, Philobater Bahgat Adly Awad, Karim Fahmy

Abstract:

Background: Laparoscopic adjustable gastric banding was one of the most commonly applied bariatric procedures in the last 8 years. However, the failure rate was very high, with approximately 60% of patients not achieving the desired weight loss. Most of these patients sought revisional surgery. In this study, we compared two of the most common weight loss procedures performed nowadays: laparoscopic sleeve gastrectomy and laparoscopic one-anastomosis gastric bypass. Objective: To compare the weight loss and postoperative outcomes among patients undergoing conversion to laparoscopic one-anastomosis gastric bypass (cOAGB) or laparoscopic sleeve gastrectomy (cSG) after a failed laparoscopic adjustable gastric band (LAGB). Patients and Methods: A prospective cohort study was conducted from June 2020 to June 2022 at a single medical center and included 77 patients undergoing single-stage conversion to cOAGB or cSG. Patients were reassessed for weight loss, comorbidity remission, and postoperative complications at 6, 12, and 18 months. Results: There were 77 patients with a failed LAGB in our study. Group I comprised 43 patients who underwent cOAGB and Group II comprised 34 patients who underwent cSG. The mean age was 38.58 years in the cOAGB group and 39.47 years in the cSG group (p=0.389). Of the 77 patients, 10 (12.99%) were male and 67 (87.01%) were female. Regarding body mass index (BMI), the mean BMI was 41.06 in the cOAGB group and 40.5 in the cSG group (p=0.042). The two groups were compared postoperatively in relation to EBWL%, BMI, and comorbidity remission over 18 months of follow-up. BMI was calculated postoperatively at three visits. After 6 months of follow-up, the mean BMI was 34.34 in the cOAGB group and 35.47 in the cSG group (p=0.229). At the 12-month follow-up, the mean BMI was 32.69 in the cOAGB group and 33.79 in the cSG group (p=0.2). Finally, after 18 months of follow-up, the mean BMI was 30.02 in the cOAGB group and 31.79 in the cSG group (p=0.001). The groups showed no statistically significant differences at the 6- and 12-month follow-ups, with p-values of 0.229 and 0.2, respectively. However, after 18 months of follow-up, patients who underwent cOAGB achieved a lower BMI than those who underwent cSG, with a statistically significant p-value of 0.005. Regarding EBWL%, there was a statistically significant difference between the two groups. After 6 months of follow-up, the mean EBWL% was 35.9% in the cOAGB group and 33.14% in the cSG group. At the 12-month follow-up, the mean EBWL% was 52.35 in the cOAGB group and 48.76 in the cSG group (p=0.045). Finally, after 18 months of follow-up, the mean EBWL% was 62.06 ± 8.68 in the cOAGB group and 55.58 ± 10.87 in the cSG group (p=0.005). Regarding comorbidity remission, diabetes mellitus remission was found in 22 (88%) patients in the cOAGB group and 10 (71.4%) patients in the cSG group (p=0.225). Hypertension remission was found in 20 (80%) patients in the cOAGB group and 14 (82.4%) patients in the cSG group (p=1). In addition, dyslipidemia remission was found in 27 (87%) patients in the cOAGB group and 17 (70%) patients in the cSG group (p=0.18). Finally, GERD remission was found in 15 (88.2%) patients in the cOAGB group and 6 (60%) patients in the cSG group (p=0.47). There were no statistically significant differences between the two groups in the postoperative outcomes. Conclusion: This study suggests that the conversion of a LAGB to either cOAGB or cSG can feasibly be performed as a single-stage operation. cOAGB achieved significantly better weight loss results than cSG at the mid-term follow-up. However, there was no significant difference in the postoperative complications or in the resolution of the comorbidities. Therefore, cOAGB could provide a reliable alternative, but this needs to be substantiated in future long-term studies.
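
For context, the percentage excess body weight loss (EBWL%) reported above is conventionally defined as below, with the ideal weight usually taken as the weight corresponding to a BMI of 25 kg/m²; the abstract does not state which reference the authors used, so this definition is an assumption.

\[
\%\mathrm{EBWL} = \frac{W_{\text{initial}} - W_{\text{current}}}{W_{\text{initial}} - W_{\text{ideal}}}\times 100, \qquad W_{\text{ideal}} \approx 25\ \mathrm{kg\,m^{-2}} \times h^{2}
\]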

Keywords: laparoscopic, gastric banding, one-anastomosis gastric bypass, sleeve gastrectomy, revisional surgery, weight loss

Procedia PDF Downloads 63
450 Automatic Identification and Classification of Contaminated Biodegradable Plastics Using Machine Learning Algorithms and Hyperspectral Imaging Technology

Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik

Abstract:

Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited to uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. This study leverages hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE) and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
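
A hedged sketch of per-spectrum classification with one of the algorithms listed above (logistic regression), trained on synthetic reflectance spectra; the band count, preprocessing and class-specific spectral shapes are invented stand-ins for the study's hyperspectral data.

```python
# Synthetic "spectra": a smooth class-specific baseline plus noise, one row per pixel.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

classes = ["PET", "PP", "LDPE", "HDPE", "PBAT", "PLA", "PHA"]
rng = np.random.default_rng(0)
n_per_class, n_bands = 200, 224          # band count is an assumption

X = np.vstack([
    np.sin(np.linspace(0, 3 + i, n_bands)) + rng.normal(0, 0.1, (n_per_class, n_bands))
    for i, _ in enumerate(classes)
])
y = np.repeat(classes, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)
# Standardize each band, then fit a multinomial logistic regression classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```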

Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms

Procedia PDF Downloads 82
449 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study

Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio

Abstract:

Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. In the case of evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in the design and implementation of the intervention. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for the health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme structure included six weekly group sessions (two hours each) encompassing strengths such as Values and self-care, Optimism, Empathy, Gratitude, Forgiveness, and Meaning of life and work. The feasibility criteria evaluated were: Demand, Acceptability, Satisfaction with the programme and with the moderator, Comprehension/Generalization of contents, Evaluation of the moderator (Social Skills and Integrity/Fidelity), Adherence, and programme implementation. Overall, 11 retirees (F = 11), age range 54-75, from the metropolitan region of Porto Alegre-RS-Brazil took part in the study. The instruments used were: a Qualitative Admission Questionnaire; a Moderator Field Diary; the Programme Evaluation Form to assess participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and Comprehension/Generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form to assess the moderator's Social Skills (a five-item 4-point Likert scale), Integrity/Fidelity (a 10-item 4-point Likert scale), and Adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics as well as Intraclass Correlation Coefficients were used for the quantitative data and the inter-rater reliability analysis. The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall 88.3% attendance rate), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported Comprehension/Generalization of the contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's Social Skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), Integrity/Fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' Adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, the descriptive and Intraclass Correlation results were considered adequate. Structural changes were introduced in the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The results obtained were satisfactory, allowing changes to be made for further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.
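
For reference, a common intraclass correlation for inter-rater reliability with k raters scoring n targets is the two-way random-effects, absolute-agreement, single-measures form, ICC(2,1); which ICC form the authors used is not stated, so the formula below is illustrative rather than a description of their analysis.

\[
\mathrm{ICC}(2,1) = \frac{MS_R - MS_E}{MS_R + (k-1)\,MS_E + \dfrac{k}{n}\left(MS_C - MS_E\right)}
\]

where MS_R, MS_C and MS_E are the mean squares for targets (rows), raters (columns) and error from a two-way ANOVA.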

Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees

Procedia PDF Downloads 196
448 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends

Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Besides, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have negative aspects such as a low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass along with considerable reductions in the moisture and volatile matter contents. In this context, several woody biomass materials including Rhododendron, hybrid poplar, and ash-tree were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. The solid residue obtained from torrefaction, also called 'biochar', was analyzed to monitor the variations taking place in the biomass properties. On the other hand, some Turkish lignites from the Elbistan, Adıyaman-Gölbaşı and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance for lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars; the blending ratio of the biochars was kept at 10 wt%, so the lignites were the dominant constituents of the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C with a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to the burning characteristics, such as the burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending. Besides, characterization techniques including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy and Scanning Electron Microscopy (SEM) were also applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites and their blends to examine the co-combustion characteristics in detail. The results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the constituent fuels in the blends. The burnout and ignition performances of each blend were compared by taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased due to the addition of biomass, the co-combustion process is a reasonable and sustainable solution due to its environmentally friendly benefits, such as reductions in net carbon dioxide (CO2), SOx and hazardous organic chemicals derived from volatiles.
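
One common way to express the synergy mentioned above is to compare the measured blend behavior with the weighted sum of the individual fuels at the stated 90/10 blending ratio; whether the authors used exactly this additivity check is not stated, so the relation below is illustrative.

\[
w_{\text{calc}}(T) = 0.9\, w_{\text{lignite}}(T) + 0.1\, w_{\text{biochar}}(T), \qquad \Delta w(T) = w_{\text{exp}}(T) - w_{\text{calc}}(T)
\]

where w(T) is the TGA weight (or burnout) at temperature T; a non-zero Δw(T) indicates non-additive, i.e. synergistic or inhibiting, interaction between the blend components.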

Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment

Procedia PDF Downloads 339
447 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region

Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Among the various developing countries in the world, like China, Malaysia, Korea, etc., India is also developing its infrastructure in the form of road, rail, airport and waterborne facilities at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion the Navi Mumbai suburb is being developed on the east bank of Thane Creek near Mumbai. Due to the limited space at the existing Mumbai airports (domestic and international) to cater for the future demand of airborne traffic, the government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005, and since the nearby townships where the new airport is proposed lie in a low-lying area, it is essential to study this complex confluence area from a hydrodynamic point of view under both tidal and extreme events (predicted discharge hydrographs), to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). The model studies were conducted using an unstructured mesh to simulate the Panvel estuarine area (93 km2), with calibration and validation of the model against hydraulic field measurements, to determine the maximum water levels around the airport for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and the peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers, the Gadhi, Kalundri, Taloja, Kasadi and Ulwe, meeting at the proposed airport area. The studies revealed that: (a) the Ulwe River flowing beneath the proposed airport needs to be diverted; the proposed 120 m wide Ulwe diversion channel, having a wider base width of 200 m at the SH-54 bridge on the Ulwe River, along with the removal of the existing bund in Moha Creek, is inevitable to keep the SGE of the airport to a minimum; (b) a clear waterway of 80 m at the SH-54 bridge (Ulwe River) and of 120 m at the Amra Marg bridge near Moha Creek is also essential for the Ulwe diversion; and (c) river bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe River diversion channel, are essential to avoid inundation of the low-lying areas. The maximum water levels predicted around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, and thus the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is a promising tool to simulate complex extreme hydrodynamic events and provides a reliable solution to evolve the optimal SGE of an airport.

Keywords: airport, hydrodynamics, safe grade elevation, tides

Procedia PDF Downloads 262
446 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items. Each item is scored on a scale of 0, 1, 2 and 3. A score of 6 or above indicated some difficulty or definite difficulty in hearing in daily activities, and a low score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation by pure tone audiometry. Further, the environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai city using a digital sound level meter (SLM, Agronic 8928 B & K type) in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB(A); however, the minimum noise level in this area was 72.6 dB(A) at any given point of time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commissioning of flyovers with two-tier traffic, skywalks, increasing vehicular traffic on the roads, high-rise buildings and other commercial and urbanization activities in Mumbai city have most probably increased the overall environmental noise levels. Trees, which acted as noise absorbers, have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). Forty-six participants who had tinnitus or obtained a score of 6 underwent pure tone audiometry, and it was found that the prevalence of hearing loss in hawkers and shopkeepers is 19% (10% hawkers and 9% shopkeepers). The results indicate that 29 (42.6%) of the 64 hawkers and 17 (47.2%) of the 36 shopkeepers underwent PTA, with no significant difference between the groups in the percentage of noise-induced hearing loss. The results also reveal that, of the participants who exhibited tinnitus, 19 out of 46 (41.30%) had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at the adjacent frequencies was nearly normal. Seven hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker of road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development combined with social and urban forestry, can help in this regard.
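
The "overall noise level" quoted for a zone is typically an A-weighted equivalent continuous sound level; the standard definition is given below for reference, though the abstract does not state that the meter reported exactly this quantity.

\[
L_{Aeq,T} = 10\,\log_{10}\!\left(\frac{1}{T}\int_{0}^{T} 10^{\,L_A(t)/10}\, dt\right)\ \mathrm{dB(A)}
\]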

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 204
445 Mental Well-Being and Quality of Life: A Comparative Study of Male Leather Tannery and Non-Tannery Workers of Kanpur City, India

Authors: Gyan Kashyap, Shri Kant Singh

Abstract:

Improved mental health can be articulated as good physical health and quality of life. Mental health plays an important role in everyone's life and survival. Nowadays, people live with stress due to personal matters, health problems, unemployment, the work environment, the living environment, substance use, lifestyle and many other important reasons. Many studies have confirmed that the proportion of people with mental health problems is increasing significantly in India. This study focuses on the mental well-being of male leather tannery workers in Kanpur city, India. The workplace environment as well as the living environment are important health risk factors for leather tannery workers. Leather tannery workers are particularly susceptible to many chemical and physical hazards, because they are exposed to numerous hazardous materials and processes during tanning work in a very hazardous work environment. The aim of this study is to determine the level of mental health disorder and the quality of life among male leather tannery and non-tannery workers in Kanpur city, India. The study utilized primary data from a cross-sectional household study conducted from January to June 2015 on tannery and non-tannery workers in the Jajmau area of Kanpur city, India, as part of a PhD programme. A sample of 286 tannery and 295 non-tannery workers was collected from the study area. Information was collected from workers in the 15-70 age group who had been working for at least one year at the time of the survey. The study utilized the General Health Questionnaire (GHQ-12) and a work-related stress scale to assess the mental well-being of male tannery and non-tannery workers. For the GHQ-12 and the work-related stress scale, the polychoric factor analysis method was used to obtain the best thresholds and scoring. Important questions such as 'How would you rate your overall quality of life?' on a Likert scale were used to measure quality of life, along with questions on earnings, education, family size, living conditions, household assets, media exposure, health expenditure, treatment-seeking behavior, food habits, etc. The results revealed that around one-third of tannery workers had severe mental health problems, a higher proportion than among non-tannery workers. Mental health problems showed a statistically significant association with wealth quintile: 56 percent of the tannery workers belonging to the medium wealth quintile had severe mental health problems, and 42 percent of the tannery workers from the low wealth quintile had moderate mental health problems. The work-related stress scale yielded statistically significant results for tannery workers. A large proportion of tannery and non-tannery workers reported that they are unable to meet their basic needs from their earnings and are living in very poor conditions. An important result of the study is that tannery workers involved in beam house work in the tannery (58%) had severe mental health problems. The study found a statistically significant association between tannery work and mental health problems among tannery workers.

Keywords: GHQ-12, mental well-being, factor analysis, quality of life, tannery workers

Procedia PDF Downloads 388
444 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology

Authors: Mahdi Farajzadeh Ajirlou

Abstract:

Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of pregnancy or close follow-up should a cardiac anomaly be proven. Detection of the fetal heart plays an important part in the assessment of the fetus, and it reflects fetal cardiac function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function and so determine the degree of cardiovascular compromise in fetal conditions at risk of hydrops fetalis. This study discusses imaging of the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS), four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) provides real-time images with good visualization of blood vessels and soft tissue. DUS imaging can show the morphology of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set corresponding to numerous 2D frames one behind the other. The volume acquisition can be performed as a static 3D scan, as online 4D (direct volume scan, live 3D ultrasound, the so-called 4D (3D/4D)), or as spatiotemporal image correlation (STIC, off-line 4D with a cyclic volume sweep). Fetal cardiovascular MRI would appear to be an ideal approach for the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data by maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), a set of minimally invasive catheter interventions, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart defects. In special cases, it may be possible to modify the natural history of congenital heart disease. It is entirely possible that future generations will 'repair' congenital heart defects in utero using nanotechnologies or remote computer-guided micro-robots working at the cellular level.

Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter

Procedia PDF Downloads 43
443 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been used as a marker by scientists following subsequent genetic studies. In biotechnology, cell biology, and molecular biology, the GFP gene is frequently used as a reporter of expression, and in modified forms it has been used to make biosensors. Many animals have been created that express GFP as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be identified, so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signalling events can be detected. Additionally, monitoring GFP is noninvasive: it can be detected under UV light simply because it generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with any biological processes of interest. The synthesis of GFP involves several steps: construction of the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel-purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 DE3 (pLysE) strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates with kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP using a His-Bind affinity chromatography resin (Novagen). The purity of the GFP was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), and the protein concentration was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using the GFP solution (10 mg/mL) as starting material and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as template. For the fabrication of nanofibers with different fiber diameters, sol-gel solutions comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL water:ethanol (3:2) mixtures were prepared, and the solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h-1 using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in many biomedical applications such as bio-imaging, biomechanics, biomaterials, and tissue engineering.

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 320
442 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks associated with digitally embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation, and the analysis presented in this paper conveys valuable information for future research aimed at more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential threats. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do they don't always go silent; they can keep running while delivering garbage data, and if that data is not filtered out it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the sheer weight of the data collected affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting, and large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it deserves its own mention; the goal of this software is to encrypt your system so that it can only be unlocked with a key held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be redirected to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo-noise loop. After testing multiple potential solutions, we found that the most promising approach to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and the behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware providing these features would be an easy and secure solution for future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
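
To make the proposed device-policing idea concrete, here is a minimal sketch, not taken from the paper, of how a set of peers could vote on whether a sensor reading is plausible before it is accepted. The majority-vote rule, the tolerance parameter, and all names are illustrative assumptions rather than the authors' actual consensus or blockchain implementation.

```python
# Hypothetical sketch: peers cross-check a sensor reading by majority vote
# before it is accepted into the shared (e.g., blockchain-backed) record.
from statistics import median

def validate_reading(candidate, peer_readings, tolerance=0.1):
    """Each peer votes 'valid' if the candidate is within `tolerance`
    (relative) of its own recent reading; the majority decides."""
    votes = []
    for peer_value in peer_readings:
        scale = max(abs(peer_value), 1e-9)
        votes.append(abs(candidate - peer_value) / scale <= tolerance)
    return sum(votes) > len(votes) / 2

# Example: one tampered/faulty sensor proposes an implausible value.
peers = [20.1, 19.8, 20.3, 20.0]      # readings from neighbouring devices
print(validate_reading(20.2, peers))  # True  -> accepted
print(validate_reading(55.0, peers))  # False -> flagged as deviant behaviour

# A deviant device could then be quarantined, e.g. by recording the failed
# vote on the ledger so other peers stop trusting its data.
print("median of peer readings:", median(peers))
```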

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 107
441 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars

Authors: Tobias Schramm, Günther Prokop

Abstract:

In Germany, the number of road fatalities has been falling since 2010, but at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components, and various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only very few countries; in Germany, there is as yet no requirement for such an examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research; however, current research results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behavior of passenger cars, which can lead to untripped rollover accidents. A characteristic-curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized, and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was validated with steering-wheel-angle sine sweep maneuvers. The model was then used to simulate steering-wheel-angle sine and fishhook maneuvers, which probe the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out over a realistic parameter space in order to demonstrate how various vehicle characteristics modulate the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars examined. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency, and this roll-angle amplification degrades the wheel load distribution during the investigated maneuvers. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness have a major influence on how strongly degraded shock absorbers affect the tipping and rolling behavior of passenger cars.
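
As a rough illustration of why degraded damping amplifies roll near the roll natural frequency, the sketch below simulates a single-degree-of-freedom linear roll model under a sinusoidal lateral-acceleration input and compares a nominal damper with one whose damping is reduced. The model structure, all parameter values, and the 40% degradation factor are illustrative assumptions and not the five-mass model or damper characteristics used in the study.

```python
# Hypothetical single-DOF roll-plane model (not the paper's five-mass model):
#   I_x * phi'' + c * phi' + k * phi = m * h * a_y(t)
import numpy as np
from scipy.integrate import solve_ivp

I_x, k, m, h = 600.0, 80_000.0, 1500.0, 0.55    # roll inertia, roll stiffness, mass, CG height (assumed)
c_nominal = 6_000.0                             # assumed nominal roll damping [N m s/rad]
c_degraded = 0.6 * c_nominal                    # assumed 40 % damping loss

f_roll = np.sqrt(k / I_x) / (2 * np.pi)         # undamped roll natural frequency [Hz]
a_y = lambda t: 4.0 * np.sin(2 * np.pi * f_roll * t)  # excite near resonance, ~4 m/s^2

def rhs(t, y, c):
    phi, phi_dot = y
    phi_ddot = (m * h * a_y(t) - c * phi_dot - k * phi) / I_x
    return [phi_dot, phi_ddot]

t_eval = np.linspace(0.0, 8.0, 2000)
for label, c in [("nominal", c_nominal), ("degraded", c_degraded)]:
    sol = solve_ivp(rhs, (0.0, 8.0), [0.0, 0.0], args=(c,), t_eval=t_eval, max_step=0.01)
    peak_deg = np.degrees(np.max(np.abs(sol.y[0])))
    print(f"{label:9s} damper: peak roll angle ~ {peak_deg:.1f} deg")
# The reduced damping yields a visibly larger peak roll angle at the
# roll natural frequency, mirroring the qualitative trend reported above.
```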

Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber

Procedia PDF Downloads 17
440 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function

Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio

Abstract:

Pancreatic endocrine diseases include pathologies such as insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: as reported by the National Center for Biotechnology Information (NCBI), 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2. Building on this, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It combines several independent published algorithms, such as the Adult Treatment Panel III (ATP-III) criteria, the triglycerides and glucose (TyG) index, the homeostasis model assessment of insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, the estimated glomerular filtration rate (eGFR), the urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to obtain a full picture of the patient's pancreatic endocrine function. To evaluate the estimated accuracy of this test, an iterative process was performed with a machine learning (ML) algorithm on a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity 99.13%; consequently, the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated in a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve in the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), leaving 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were incorrect, giving a final n of 152 patients. In this validation set, the results were 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise for providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
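
For reference, the sketch below computes three of the published indices named above from fasting laboratory values using their standard formulas (HOMA-IR with glucose in mg/dL divided by 405, QUICKI as the reciprocal of the summed base-10 logarithms, and the TyG index as the natural log of half the triglyceride-glucose product). The example values and the interpretive comments are illustrative and are not the study's combined algorithm or its ML-derived cut-offs.

```python
# Standard fasting-sample indices used as inputs by the proposed test
# (illustrative values; not the study's combined algorithm).
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose [mg/dL] * fasting insulin [uU/mL] / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10(insulin [uU/mL]) + log10(glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def tyg_index(triglycerides_mg_dl: float, glucose_mg_dl: float) -> float:
    """TyG = ln(fasting triglycerides [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

# Example fasting panel (hypothetical patient)
glucose, insulin, triglycerides = 105.0, 14.0, 160.0   # mg/dL, uU/mL, mg/dL
print(f"HOMA-IR = {homa_ir(glucose, insulin):.2f}")     # higher -> more insulin resistance
print(f"QUICKI  = {quicki(glucose, insulin):.3f}")      # lower  -> more insulin resistance
print(f"TyG     = {tyg_index(triglycerides, glucose):.2f}")
```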

Keywords: algorithm, diabetes, laboratory medicine, non-invasive

Procedia PDF Downloads 34
439 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purpose of a hydraulic gate is to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is therefore very important for reservoirs and hydroelectric power plants. High-tensile-strength steel plate is used as the constructional material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation, and underwater service, lead to stress concentration and a high crack growth rate and thus affect the safety and serviceability of hydroelectric power plants; the mechanics of a cracked gate therefore need to be investigated. Stress distribution analysis is an essential technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been shown to be suitable for analyzing the buckling of weld seams and cracked steel plates, and it can easily handle the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates constitute a three-dimensional system. Therefore, a three-dimensional Digital Image Correlation (DIC) method has been developed and applied to analyze the strain variation of a cracked steel plate at a weld joint. DIC is a non-contact method for measuring the deformation of a test object, and thanks to the rapid development of digital cameras its cost has been greatly reduced. Moreover, DIC offers the advantage of wide practical applicability in both laboratory and field tests without restrictions on the size of the test object. The purpose of this research is thus to develop and apply this technique to monitor the crack variation of a welded steel hydraulic gate and its deformation under loading. Images captured during real-time monitoring are used to analyze the strain change at each loading stage. The proposed three-dimensional DIC method developed in this study is applied to analyze the post-buckling behavior and buckling tendency of a welded steel plate with a crack, and the stress intensity for three-dimensional analyses of different and reinforced materials in the steel plate is also examined. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of a welded steel plate at different loading stages. In particular, it can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical behavior of composite materials subjected to loading and operation.
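
To show the core idea behind DIC subset matching, a minimal 2D sketch is given below: it tracks a small speckle subset between a reference and a deformed image by searching for the peak of the zero-normalized cross-correlation. The synthetic images, subset size, and integer-pixel search are illustrative simplifications of the authors' three-dimensional, sub-pixel system.

```python
# Minimal 2D DIC sketch: track a subset via zero-normalized cross-correlation.
# Synthetic speckle images and integer-pixel search only (illustrative).
import numpy as np

rng = np.random.default_rng(0)
reference = rng.random((120, 120))
true_shift = (3, 5)                       # (rows, cols) imposed "deformation"
deformed = np.roll(reference, true_shift, axis=(0, 1))

def zncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Reference subset centred at (40, 40), half-width 10 px
r0, c0, hw, search = 40, 40, 10, 8
subset = reference[r0 - hw:r0 + hw + 1, c0 - hw:c0 + hw + 1]

best = (-2.0, 0, 0)
for dr in range(-search, search + 1):
    for dc in range(-search, search + 1):
        candidate = deformed[r0 + dr - hw:r0 + dr + hw + 1,
                             c0 + dc - hw:c0 + dc + hw + 1]
        score = zncc(subset, candidate)
        if score > best[0]:
            best = (score, dr, dc)

print(f"recovered displacement: {best[1:]} (true {true_shift}), ZNCC = {best[0]:.3f}")
# Repeating this over a grid of subsets yields the displacement/strain field.
```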

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 520
438 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective intervention when identified and managed early. Multiple definitions have been proposed for stratifying patients' risk of early acute coagulopathy, with considerable variation in the defining criteria, including several trauma scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison with recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for the conventional coagulation assays for identifying patients with acute traumatic coagulopathy. Data for 100 adult trauma patients were then collected prospectively; this cohort was stratified by the established definition, classified as 'coagulopathic' or 'non-coagulopathic', and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy according to our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5 and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%, and the mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not differ between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison with the prehospital-parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures must be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
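
As an illustration of the cut-off-derivation step, the sketch below applies a Youden-index criterion on the ROC curve of synthetic INR values to pick a threshold. The data, the choice of the Youden index, and the use of scikit-learn are assumptions for demonstration only and do not reproduce the study's actual retrospective analysis.

```python
# Hypothetical ROC-based cut-off selection for a coagulation assay (e.g., INR).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
# Synthetic cohort: 0 = no early coagulopathy, 1 = early coagulopathy
labels = np.concatenate([np.zeros(300, dtype=int), np.ones(190, dtype=int)])
inr = np.concatenate([
    rng.normal(1.05, 0.10, 300),   # non-coagulopathic INR values (assumed)
    rng.normal(1.35, 0.20, 190),   # coagulopathic INR values (assumed)
])

fpr, tpr, thresholds = roc_curve(labels, inr)
youden = tpr - fpr                      # Youden's J statistic at each threshold
best = np.argmax(youden)

print(f"AUROC            = {roc_auc_score(labels, inr):.3f}")
print(f"optimal INR cut  = {thresholds[best]:.2f}")
print(f"sensitivity/spec = {tpr[best]:.2f} / {1 - fpr[best]:.2f}")
# The same procedure would be repeated for PT and aPTT to build the
# conventional-assay definition of acute traumatic coagulopathy.
```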

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
437 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki, Kyoka Sato

Abstract:

In emergency medicine, it is recognized that brain resuscitation is very important for reducing the mortality rate and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required for stabilizing the brain's physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, the manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff regarding medication and the settings of therapeutic apparatus. Thus, integrating and automating this control would be very effective, not only for improving the therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance tests of a prototype control system on patients are not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, that model did not consider the condition that the total volume of the intracranial cavity is almost constant due to the rigidity of the cranial bone. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed that takes into account the near constancy of the total intracranial volume. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfusion territories of the bilateral anterior, middle, and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Furthermore, the volume, the colloid concentration, and the tissue pressure of each compartment can be calculated by solving 40-dimensional nonlinear simultaneous differential equations. For validation, the model was analyzed under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. In the results, the calculated fluid flows, tissue volumes, colloid concentrations, and tissue pressures all converged to values suitable for the set conditions within at most 60 minutes. Because these results do not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable CBF and ICP to be simulated more precisely, by calculating the effect of blood pressure changes on extracellular fluid migration and the effect of ICP changes on CBF.
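
To illustrate the kind of compartmental bookkeeping such a model performs, the sketch below integrates a deliberately tiny two-compartment exchange (tissue fluid and CSF) under a fixed-total-volume constraint with scipy. The equations, rate constants, and initial values are invented for illustration and are far simpler than the paper's fourteen-compartment, 40-equation system.

```python
# Toy two-compartment fluid-exchange model (illustrative only):
#   dV_tissue/dt = Q_cap_in - k_out * (P_tissue - P_csf)
#   dV_csf/dt    = k_out * (P_tissue - P_csf) - Q_absorb
# with a linear pressure-volume relation and matched inflow/absorption so the
# total volume stays (nearly) constant, echoing the rigid-skull constraint.
import numpy as np
from scipy.integrate import solve_ivp

Q_cap_in, Q_absorb = 0.35, 0.35          # mL/min capillary inflow, CSF absorption (assumed)
k_out = 0.05                             # mL/(min*mmHg) tissue->CSF conductance (assumed)
E_tissue, E_csf = 0.8, 1.2               # mmHg/mL compartment "elastances" (assumed)
V0 = np.array([1200.0, 150.0])           # initial volumes [mL] (assumed)

def pressures(V):
    return E_tissue * (V[0] - V0[0]), E_csf * (V[1] - V0[1])

def rhs(t, V):
    p_t, p_c = pressures(V)
    q_exchange = k_out * (p_t - p_c)
    return [Q_cap_in - q_exchange, q_exchange - Q_absorb]

sol = solve_ivp(rhs, (0.0, 120.0), V0, t_eval=np.linspace(0, 120, 7))
for t, v_t, v_c in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.0f} min  V_tissue={v_t:7.1f} mL  V_csf={v_c:6.1f} mL  total={v_t + v_c:7.1f} mL")
# The volumes settle to a new equilibrium within roughly an hour while the
# total stays constant, a miniature analogue of the convergence reported above.
```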

Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control

Procedia PDF Downloads 157
436 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings moved beyond the targets of the previous version toward more ambitious ones, for instance by establishing that, by 31 December 2020, all new buildings should be nearly zero-energy buildings. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for them the nearly zero-energy target is brought forward to January 2019. On the other hand, given the very low turnover rate of buildings (in Europe it ranges between 1-3% per year), a policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. In line with this, the study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty involved in numerical modelling and the real environmental and economic impacts of adopting certain energy efficiency technologies. The case study is a university building in the Molise region, in central Italy. The proposed approach is based on the cost-optimal methodology, as set out in Delegated Regulation (EU) No 244/2012 and the accompanying Guidelines of the European Commission, for evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration that leads to the lowest global cost over the estimated economic life cycle, taking into account not only the investment cost but also the operational costs linked to energy consumption and polluting emissions. The definition of the reference building has been supported by various in-situ surveys, investigations, and evaluations of indoor comfort. Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and definition of thermal zones; 5) building energy data. For each category, the required measurements are indicated, with suggestions for identifying their spatial distribution and timing. For the case study, the collected data, together with a comparison with energy bills, allowed proper calibration of a numerical model suitable for hourly energy simulation by means of EnergyPlus. Around 30 energy efficiency measures/packages have been taken into account, concerning both the envelope and the plant systems. Starting from the results, two points will be examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; (ii) the environmental benefits and economic implications of a deep energy refurbishment of an educational building in a cold climate.
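
Since the cost-optimal comparison hinges on the global cost over the calculation period, the sketch below computes a simplified macroeconomic global cost (investment plus discounted running costs minus discounted residual value, in the spirit of the Regulation 244/2012 framework) for two hypothetical retrofit packages. All cost figures, rates, and the 30-year period are assumptions, not values from the case study.

```python
# Simplified global-cost comparison for retrofit packages (illustrative figures).
# Global cost ~ investment + sum of discounted annual costs - discounted residual value.

def global_cost(investment, annual_energy_cost, annual_maintenance,
                years=30, discount_rate=0.03, residual_value=0.0):
    discounted_running = sum(
        (annual_energy_cost + annual_maintenance) / (1.0 + discount_rate) ** y
        for y in range(1, years + 1)
    )
    discounted_residual = residual_value / (1.0 + discount_rate) ** years
    return investment + discounted_running - discounted_residual

# Hypothetical packages for a university building (EUR)
packages = {
    "baseline (no retrofit)":        global_cost(0.0,      90_000, 8_000),
    "envelope insulation + windows": global_cost(450_000,  55_000, 8_000, residual_value=60_000),
    "deep retrofit (envelope+HVAC)": global_cost(900_000,  32_000, 10_000, residual_value=150_000),
}
for name, cost in sorted(packages.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} global cost ~ {cost:12,.0f} EUR")
# The cost-optimal scenario is the one with the lowest global cost over the period.
```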

Keywords: energy simulation, modelling calibration, cost-optimal retrofit, university building

Procedia PDF Downloads 181
435 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent

Authors: Mohammad Anvari, Helen S. Joyner (Melito)

Abstract:

Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize camelina byproduct (defatted meal) after oil extraction would be a pivotal factor for promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in the polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential application in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely underexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on the material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal at different conditions to obtain insight on the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined under different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear thinning (pseudoplastic) behavior, which is typical of polymer systems. The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends. The temperature had little effect on viscoelastic moduli. However, moduli were strongly affected by concentration: samples exhibited concentrated solution behavior at low concentrations (1-2% w/v) and weak gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for designing and modeling of liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation and can alter solubility, stability, and eventually, the functionality of the materials as their environment changes. In this study, the zeta potential value significantly decreased from 0.0 to -62.5 as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study showed that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
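
To connect the reported shear-thinning behavior to a concrete model, the sketch below fits the Ostwald-de Waele power law (eta = K * shear_rate^(n-1)) to synthetic viscosity data with scipy. The data points and the fitted values are fabricated for illustration and are not the camelina polysaccharide measurements.

```python
# Fit a power-law (Ostwald-de Waele) model to apparent-viscosity data:
#   eta(gamma_dot) = K * gamma_dot**(n - 1),  n < 1 => shear thinning
import numpy as np
from scipy.optimize import curve_fit

def power_law_viscosity(shear_rate, K, n):
    return K * shear_rate ** (n - 1.0)

# Synthetic shear-thinning data (illustrative, not measured values)
shear_rate = np.array([0.1, 0.5, 1, 5, 10, 50, 100, 300])          # 1/s
noise = 1 + 0.03 * np.random.default_rng(1).standard_normal(8)
viscosity = 2.5 * shear_rate ** (0.45 - 1.0) * noise                # Pa.s

(K_fit, n_fit), _ = curve_fit(power_law_viscosity, shear_rate, viscosity, p0=(1.0, 0.5))
print(f"consistency index K ~ {K_fit:.2f} Pa.s^n,  flow behaviour index n ~ {n_fit:.2f}")
# n well below 1 confirms pseudoplastic (shear-thinning) behaviour of the dispersion.
```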

Keywords: Camelina meal, polysaccharide, rheology, zeta potential

Procedia PDF Downloads 245
434 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs

Authors: Regina A. Tayong, Reza Barati

Abstract:

A stable, fast, and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration on the yield stress of the filter cake and the broken-gel viscosity, of varying polymer concentration/yield stress along the fracture face, and of fracture conductivity, fracture length, capillary pressure changes, and formation damage on fracturing fluid cleanup in tight gas reservoirs. The model has been validated against field data reported in the literature for the same reservoir. A 2D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the clean-up model by distributing 200 bbl of water around the fracture. A 2D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been reported in sufficient detail to permit easy replication of the results. Increasing the capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, acting on the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity, and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant. Several correlations have been developed relating the pressure distribution and polymer concentration to distance along the fracture face, and the average polymer concentration to injection time. The gradient in the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration, and the rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate the yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations and (ii) simulate the effect of increasing breaker activity on yield stress and broken-gel viscosity, and to capture the effect of (i) and (ii) on cumulative gas production within reasonable computational time.
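
As a small illustration of the yield-power-law (Herschel-Bulkley) rheology the simulator incorporates, the sketch below evaluates the apparent viscosity eta = tau_0/gamma_dot + K * gamma_dot^(n-1) for two hypothetical breaker levels. The parameter values are invented to show the qualitative effect of a breaker lowering yield stress and consistency, and are not taken from the simulator.

```python
# Yield-power-law (Herschel-Bulkley) apparent viscosity:
#   tau = tau_0 + K * gamma_dot**n   =>   eta_app = tau_0/gamma_dot + K*gamma_dot**(n-1)
import numpy as np

def apparent_viscosity(shear_rate, tau_0, K, n):
    shear_rate = np.asarray(shear_rate, dtype=float)
    return tau_0 / shear_rate + K * shear_rate ** (n - 1.0)

shear_rates = np.array([1.0, 10.0, 100.0, 500.0])            # 1/s
# Hypothetical partially broken gels: higher breaker -> lower tau_0 and K
gels = {
    "low breaker (5 gal/Mgal)":   dict(tau_0=8.0, K=1.2, n=0.55),
    "high breaker (15 gal/Mgal)": dict(tau_0=2.0, K=0.4, n=0.65),
}
for label, params in gels.items():
    eta = apparent_viscosity(shear_rates, **params)
    print(label, " eta_app [Pa.s] at", shear_rates, "1/s ->", np.round(eta, 3))
# The lower-viscosity, lower-yield-stress broken gel is easier to clean up,
# consistent with the higher cumulative gas production reported above.
```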

Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation

Procedia PDF Downloads 132
433 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations

Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel

Abstract:

Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in Glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate Glipizide formulations and to assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and molecules forming complexes with nitroso groups, will be investigated. Additionally, the conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to improve understanding and mitigation strategies. Method: The methodology involves the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials are prepared, each containing a quantity of Glipizide equivalent to 10 mM. The compound is dissolved in an acidic environment and supplemented with 40 mM NaNO2, and the resulting solutions are maintained at 37°C for a duration of 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 x 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, run in gradient mode. The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35°C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the formed nitrosamine drug-substance-related impurities (NDSRIs), the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in electrospray ionization (ESI) mode within the mass range of 100-1000 Daltons. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify possible NDSRIs generated in different formulations of glipizide. The NAP test was found to generate various NDSRIs, a finding that has not been reported previously and that reveals contamination of Glipizide. These NDSRIs are categorised based on their predicted carcinogenic potency, and acceptable intakes in medicines are recommended. The analytical method was found to be specific and reproducible.

Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS

Procedia PDF Downloads 37
432 Optimizing the Effectiveness of Docetaxel with Solid Lipid Nanoparticles: Formulation, Characterization, in Vitro and in Vivo Assessment

Authors: Navid Mosallaei, Mahmoud Reza Jaafari, Mohammad Yahya Hanafi-Bojd, Shiva Golmohammadzadeh, Bizhan Malaekeh-Nikouei

Abstract:

Background: Docetaxel (DTX), a potent anticancer drug derived from the European yew tree, is effective against various human cancers by inhibiting microtubule depolymerization. Solid lipid nanoparticles (SLNs) have gained attention as drug carriers for enhancing drug effectiveness and safety. SLNs, submicron-sized lipid-based particles, can passively target tumors through the "enhanced permeability and retention" (EPR) effect, providing stability, drug protection, and controlled release while being biocompatible. Methods: The SLN formulation included biodegradable lipids (Compritol and Precirol), hydrogenated soy phosphatidylcholine (H-SPC) as a lipophilic co-surfactant, and Poloxamer 188 as a non-ionic polymeric stabilizer. Two SLN preparation techniques, probe sonication and microemulsion, were assessed. Characterization encompassed the SLNs' morphology, particle size, zeta potential, matrix, and encapsulation efficiency. In vitro cytotoxicity and cellular uptake studies were conducted using mouse colorectal (C-26) and human malignant melanoma (A-375) cell lines, comparing SLN-DTX with Taxotere®. In vivo studies evaluated tumor inhibitory efficacy and survival in mice bearing colorectal (C-26) tumors, comparing SLN-DTX with Taxotere®. Results: SLN-DTX demonstrated stability, with an average size of 180 nm, a low polydispersity index (PDI) of 0.2, and an encapsulation efficiency of 98.0 ± 0.1%. Differential scanning calorimetry (DSC) suggested amorphous encapsulation of DTX within the SLNs. In vitro studies revealed that SLN-DTX exhibited nearly equivalent cytotoxicity to Taxotere®, depending on concentration and exposure time. Cellular uptake studies demonstrated superior intracellular DTX accumulation with SLN-DTX. In a C-26 mouse model, SLN-DTX at 10 mg/kg outperformed Taxotere® at 10 and 20 mg/kg, with no significant differences in body weight changes and a remarkably high survival rate of 60%. Conclusion: This study concludes that SLN-DTX, prepared by probe sonication, offers stability and enhanced therapeutic effects. It displayed almost the same in vitro cytotoxicity as Taxotere® but showed superior cellular uptake. In a mouse model, SLN-DTX effectively inhibited tumor growth, with 10 mg/kg outperforming even 20 mg/kg of Taxotere®, without adverse body weight changes and with higher survival rates. This suggests that SLN-DTX has the potential to reduce adverse effects while maintaining or enhancing docetaxel's therapeutic profile, making it a promising drug delivery strategy suitable for industrialization.

Keywords: docetaxel, Taxotere®, solid lipid nanoparticles, enhanced permeability and retention effect, drug delivery, cancer chemotherapy, cytotoxicity, cellular uptake, tumor inhibition

Procedia PDF Downloads 83