Search results for: efficient energy values
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18086

1166 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls

Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little

Abstract:

Fasting has been practiced for centuries and is incorporated into the observances of different religions, including Islam, whose followers fast intermittently throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for cardio-metabolic and endocrine benefits of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c and lipids, as well as body composition (by DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min-1, p=0.045). Despite this increase in orexigenic ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan, but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min-1, p=0.003). In conclusion, Ramadan, as a model of IF, appears to have more favourable effects on body composition in T2DM than in controls, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial in T2DM as a nutritional intervention. Larger studies are warranted.

Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones

Procedia PDF Downloads 312
1165 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming

Authors: David Muyise

Abstract:

Mixed Integer Programming (MIP) is an approach that optimizes a set of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste plastic to fuel oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing waste plastic into a cleaner fuel that can power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale deployment in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location and number of plants, and the downstream fuel applications were made concurrently, based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model draws on qualitative data gathered from waste plastic pickers at landfills and from potential investors, and on quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is suitable only for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be ideal locations for deploying distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be ideal locations to initially utilize the decentralized pyrolysis technology system. We conclude that the model's findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste plastic to fuel processing in Uganda and elsewhere in developing economies.
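
The abstract does not publish its model formulation, but the core decision structure it describes (binary build/no-build choices per candidate site, minimizing cost subject to covering the waste supply) can be sketched with a toy brute-force enumeration. All site names, capacities, and costs below are hypothetical illustrations, not the study's data.

```python
from itertools import product

# Hypothetical candidate sites: plant capacity (t/day of waste plastic)
# and capital cost (M$). A real MIP would use a solver, not enumeration.
capacity = {"Kalagi": 5, "Mukono": 8, "Jinja": 6, "Kampala": 20}
capital_cost = {"Kalagi": 1.0, "Mukono": 1.4, "Jinja": 1.2, "Kampala": 3.0}
total_demand = 18  # t/day of waste plastic to be processed (assumed)

def optimal_deployment(capacity, cost, demand):
    """Enumerate the binary build/no-build decisions and return the
    cheapest set of plants whose combined capacity covers demand."""
    best = None
    names = list(capacity)
    for choice in product([0, 1], repeat=len(names)):
        built = [n for n, c in zip(names, choice) if c]
        if sum(capacity[n] for n in built) < demand:
            continue  # infeasible: not enough processing capacity
        total = sum(cost[n] for n in built)
        if best is None or total < best[0]:
            best = (total, built)
    return best

cost, plants = optimal_deployment(capacity, capital_cost, total_demand)
```

For realistic instance sizes the same objective and constraints would be handed to an MIP solver; the enumeration above only illustrates what the solver searches over.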

Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing

Procedia PDF Downloads 129
1164 Convective Boiling of CO₂/R744 in Macro and Micro-Channels

Authors: Adonis Menezes, J. C. Passos

Abstract:

The current state of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated this work. Among non-halogenated refrigerants, CO₂/R744 has thermodynamic properties distinct from other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, as existing systems must normally be resized to meet the specific characteristics of R744, creating the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus capable of storing m = 10 kg of saturated CO₂ at T = -30 °C in an accumulator tank was used. The fluid was then pumped by a positive displacement pump with three pistons, with a controlled outlet pressure of up to P = 110 bar. This high-pressure saturated fluid passed through a Coriolis-type flow meter, and mass velocities ranged from G = 20 kg/m²·s up to G = 1000 kg/m²·s. The fluid was then sent to the first test section, of circular cross-section with diameter D = 4.57 mm, where inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct-current source with a maximum heat flux of q = 100 kW/m². The second test section used seven parallel channels, each of square cross-section with D = 2 mm sides; this section likewise had temperature and pressure control at the inlet and outlet, with a direct-current source supplying a maximum heat flux of q = 20 kW/m².
The fluid, in the two-phase state, was directed to a parallel-plate heat exchanger to return it to the liquid state so it could flow back to the accumulator tank, closing the cycle. The multi-channel test section has a viewing section, and a high-speed CMOS camera was used for image acquisition, making it possible to observe the flow patterns. The experiments were conducted rigorously, enabling the development of a database on the convective boiling of R744 in macro- and micro-channels. The analysis prioritized the processes from the onset of convective boiling up to wall dryout in the subcritical regime. R744 is re-emerging as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and low GWP (Global Warming Potential), among other advantages. The experimental results were very promising for the use of CO₂ in micro-channels under convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.
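
The reported operating quantities follow from the channel geometry: mass flux is G = ṁ/A and the Joule-heated wall flux is q = P/(πDL). A minimal sketch with hypothetical mass flow, power, and tube length (only D = 4.57 mm comes from the abstract) shows values landing inside the reported ranges.

```python
import math

def mass_flux(m_dot, diameter):
    """Mass flux G = mdot / A for a circular channel (kg/m^2.s)."""
    area = math.pi * diameter ** 2 / 4.0
    return m_dot / area

def wall_heat_flux(power, diameter, length):
    """Wall heat flux q = P / (pi * D * L) for a Joule-heated tube (W/m^2)."""
    return power / (math.pi * diameter * length)

# Hypothetical operating point for the D = 4.57 mm test section:
D = 4.57e-3      # m (from the abstract)
L = 1.0          # m heated length (assumed)
m_dot = 3.28e-3  # kg/s mass flow (assumed)

G = mass_flux(m_dot, D)             # ~200 kg/m^2.s, within the 20-1000 range
q = wall_heat_flux(1000.0, D, L)    # W/m^2 for 1 kW of Joule heating
```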

Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels

Procedia PDF Downloads 143
1163 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and to further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microscopic and mechanical performance tests. Among them, 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were used for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile tests were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (engineering cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two ordinary concretes with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60 °C). The experimental results demonstrate that HPC offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. ECC improves the durability of embedded GFRP by lowering the pore-fluid pH: after 120 days of accelerated immersion at 60 °C, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, while GFRP in PC1 (pH = 13.5) retained 54.88%. UHPC, on the other hand, enhances GFRP durability by reducing the porosity and increasing the compactness of the protective concrete layer.
Due to its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2, which has a similar pore-fluid pH; this results in differing durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60 °C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution caused by interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Moreover, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, indicating that reinforcement embedded in HPC retains approximately 75% of its initial strength after 100 years of service.
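
The abstract cites long-term prediction models without giving their form. A common choice in the FRP durability literature is an exponential retention model, Y(t) = 100·exp(-t/τ), with τ fitted to accelerated-ageing data; the sketch below assumes that form (the authors' actual model may differ) and fits τ to the reported 120-day, 60 °C ECC result. Arrhenius shifting from accelerated to service temperature is omitted.

```python
import math

def retention(t_days, tau):
    """Tensile strength retention (%) under an assumed exponential
    degradation model Y(t) = 100 * exp(-t / tau)."""
    return 100.0 * math.exp(-t_days / tau)

def tau_from_observation(t_days, retained_pct):
    """Back out the time constant tau from one ageing observation."""
    return -t_days / math.log(retained_pct / 100.0)

# Fit tau to the reported 120-day, 60 degC ECC result (68.99 % retained),
# then extrapolate to a further 120 days at the same temperature.
tau = tau_from_observation(120.0, 68.99)
y_240 = retention(240.0, tau)
```

Under this model the 240-day retention is simply the 120-day retention squared (as a fraction), which is a quick consistency check on any fitted τ.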

Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength

Procedia PDF Downloads 56
1162 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify the factors impacting the market share of a healthcare provider facility or hospital (hereafter, facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve a facility's market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, with data spanning 60 key facilities in Washington State over about 3 years of history. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, followed by regression to evaluate and predict market share. The model-agnostic technique SHAP is leveraged to quantify the relative importance of features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities with an empirically calculated competitive factor that interprets the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key facility demographics, along with a placeholder for various business rules (for example, quantifying patient exchanges, provider referrals, and sister facilities). Multiple groups of competitors among facilities are thereby identified.
Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, permutation feature importance of the attributes was calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, was incorporated. This helped identify and rank the attributes impacting market share at each facility. The approach amalgamates two popular and efficient modeling practices, machine learning with graphs and tree-based regression, to reduce bias, and with these, it helps drive strategic business decisions.
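
The study's target variable is defined explicitly in the abstract: market share = a facility's encounters divided by total encounters across its identified competitor pool. A minimal sketch of that definition, with hypothetical facility names and encounter counts:

```python
def market_share(encounters, facility, competitors):
    """Market share = facility encounters / total encounters across the
    facility plus its identified competitor pool (per the study's definition)."""
    pool = [facility] + list(competitors)
    total = sum(encounters[f] for f in pool)
    return encounters[facility] / total

# Hypothetical encounter counts for one facility and its competitor pool:
encounters = {"A": 1200, "B": 800, "C": 2000}
share_a = market_share(encounters, "A", ["B", "C"])  # 1200 / 4000 = 0.30
```

In the framework described above, this ratio computed per facility and period would be the regression target for the Random Forest model.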

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 91
1161 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test

Authors: Arvind Salem

Abstract:

Voters across the country are transferring the power of redistricting from state legislatures to commissions to secure "fairer" districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, intentionally drawing distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious: some argue that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect. A result showing that commissions are effective would allay these fears, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering across the nation. However, a result against commissions may reaffirm doubts about them and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in and responsibility for knowing whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons discussed below). This study compares the degree of gerrymandering in the 2022 election ("after") to the election in which voters decided to adopt commissions ("before"). The before-election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted its commission, the state comprised a single at-large district, so its before metrics could not be calculated and it was excluded. This study uses three methods to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test at a 0.05 significance level with an expected difference of 0, with a null hypothesis that the metric values after the election are greater than or equal to those before, and an alternative hypothesis that the metric values before the election are greater than those after. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the seats-votes percentage difference, and p=0.47 for the mean-median difference) were extremely high and far from the value needed to conclude that commissions are effective. These results halt optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of generating fairer districts.
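
The first of the study's three metrics, the efficiency gap, has a standard definition (net wasted votes divided by total votes, where the winner wastes votes above the majority threshold and the loser wastes all of theirs). A minimal sketch with a hypothetical three-district state follows; the district returns are invented for illustration, not taken from the study.

```python
def efficiency_gap(districts):
    """Efficiency gap = (wasted votes for A - wasted votes for B) / total votes.
    Each district is (votes_a, votes_b). The winner wastes every vote above
    the simple-majority threshold; the loser wastes all of its votes."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        need = (a + b) // 2 + 1  # simple-majority threshold
        if a > b:
            wasted_a += a - need
            wasted_b += b
        else:
            wasted_b += b - need
            wasted_a += a
        total += a + b
    return (wasted_a - wasted_b) / total

# Hypothetical three-district state: A wins two narrow-ish seats,
# B packs its voters into one.
gap = efficiency_gap([(60, 40), (60, 40), (30, 70)])  # -0.17 (favors A)
```

A signed per-state value like this, computed before and after commission adoption, is the kind of paired observation the Wilcoxon signed-rank test above operates on.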

Keywords: commissions, elections, gerrymandering, redistricting

Procedia PDF Downloads 73
1160 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is a significant source of drinking and irrigation water in Miryang city, owing to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, together with the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis, and kriging in order to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, developing risk maps from data obtained by chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Chemical maps built in a 2-D Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed higher rates of contamination in the central and southern areas, with relatively less in the northern and southwestern parts; this could be attributed to the effects of irrigation, residual saline water, municipal sewage, and livestock wastes. At well elevations above 85 m, the scatter diagram indicated that the groundwater of the research area was mainly influenced by saline water and NO₃. pH measurements revealed slightly acidic conditions due to atmospheric CO₂ dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC.
Based on the cluster analysis, the groundwater was categorized into three groups: the CaHCO₃ type of fresh water, the NaHCO₃ type slightly influenced by seawater, and the Ca-Cl and Na-Cl types heavily affected by saline water. CaHCO₃ was the most predominant water type in the study area. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality, and GIS was a powerful tool for visualizing and analyzing the issues affecting water quality in Miryang city.
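
Kriging, as used above, starts from an experimental semivariogram: γ(h) = (1/2N)·Σ(zᵢ - zⱼ)² over sample pairs binned by separation distance h. A minimal 1-D sketch of that first step, with hypothetical well positions and concentration values (not the study's data):

```python
from itertools import combinations

def experimental_semivariogram(positions, values, bin_width):
    """Experimental semivariogram: for each distance bin, gamma(h) =
    (1 / 2N) * sum of squared value differences over the N pairs of
    sample points whose separation falls in that bin."""
    bins = {}
    for i, j in combinations(range(len(positions)), 2):
        h = abs(positions[i] - positions[j])   # 1-D separation distance
        k = int(h // bin_width)                # bin index
        sq = (values[i] - values[j]) ** 2
        n, s = bins.get(k, (0, 0.0))
        bins[k] = (n + 1, s + sq)
    return {k: s / (2 * n) for k, (n, s) in sorted(bins.items())}

# Hypothetical concentrations (mg/L) along a 1-D transect of wells (m):
x = [0.0, 10.0, 20.0, 30.0]
z = [5.0, 6.0, 8.0, 12.0]
gamma = experimental_semivariogram(x, z, bin_width=15.0)
```

A variogram model fitted to these binned values is what supplies the weights in the kriging interpolation used to produce the contamination maps.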

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 168
1159 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures

Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto

Abstract:

HFC and PFC gases have been commonly and widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing because of their high heat of vaporization and chemical stability. On the other hand, HFCs and PFCs have a strong global warming effect, so these gases emitted from equipment such as refrigerators have to be decomposed. Until now, disposal of these gases has mainly been carried out by combustion methods such as rotary kiln treatment. However, this treatment requires extremely high temperatures, over 1000 °C. In recent years, in order to reduce energy consumption, hydrolytic decomposition using catalysts and plasma decomposition treatment have attracted much attention as new disposal treatments. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas that deteriorates catalysts in the decomposition process; moreover, an additional process for its neutralization is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was examined using H-Y, H-Beta, H-MOR, and H-ZSM-5. The formation of CaF2 in the zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone showed closely similar behavior regardless of the type of zeolite (MOR, Y, ZSM-5, Beta), and there was no difference in the XRD patterns of each zeolite before and after reaction. On the other hand, differences in C2F6 decomposition were observed among the zeolite/CaO mixtures.
These results suggest that the rate-determining step for C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive sites. In other words, C2F6 decomposition on the zeolite/CaO mixtures improved compared with zeolite alone because CaO removes fluorine from the reactive sites. H-MOR/CaO achieved 100% decomposition for 3.5 h, a significant improvement over the zeolite alone. On the other hand, the Y-type zeolite/CaO mixture showed no improvement, giving almost the same value as the Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta, and Y-type zeolite. This order is similar to the acid strength characterized by NH3-TPD. Hence, it is considered that C-F bond cleavage is closely related to acid strength.

Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition

Procedia PDF Downloads 481
1158 Effects of Two Distinct Monsoon Seasons on the Water Quality of a Tropical Crater Lake

Authors: Maurice A. Duka, Leobel Von Q. Tamayo, Niño Carlo I. Casim

Abstract:

The paucity of long-term measurements and accurate water quality parameter profiles is evident for small, deep tropical lakes in Southeast Asia, leading to a poor understanding of their stratification and mixing dynamics. The water quality dynamics of Sampaloc Lake, a tropical crater lake (104 ha, 27 m deep) in the Philippines, were investigated to understand how monsoon-driven conditions impact water quality and ecological health. Located in an urban area, with approximately 10% of its surface area allocated to aquaculture, the lake is subject to distinct seasonal changes associated with the Northeast (NE) and Southwest (SW) monsoons. The NE monsoon typically occurs from October to April, the SW monsoon from May to September. These monsoons influence the lake's water temperature, dissolved oxygen (DO), chlorophyll-α (chl-α), phycocyanin (PC), and turbidity, leading to significant seasonal variability. Monthly field observations of water quality parameters were made from October 2022 to September 2023 using a multi-parameter probe (YSI ProDSS), together with the collection of meteorological data over the same period. During the NE monsoon, cooler air temperatures and sustained winds caused surface water temperatures to drop from 30.9 ºC in October to 25.5 ºC in January, weakening stratification and eventually producing lake turnover. This turnover redistributed nutrients from the hypolimnetic layers to the surface, increasing chl-α and PC levels (14-41 and 0-2 µg/L, respectively) throughout the water column. A fish kill was also observed during the lake's turnover event, resulting from the mixing of hypoxic hypolimnetic waters. Turbidity levels (0-3 NTU) were generally low but showed mid-column peaks in October, linked to thermocline-related effects, while the low values in November followed dilution and mixing from heavy rainfall.
Conversely, the SW monsoon showed increased surface temperatures (28-30 ºC), shallow thermocline formation (3-11 m), and lower surface chl-α and PC levels (2-8 and 0-0.5 µg/L, respectively), likely due to limited nutrient mixing and more stable stratification. Turbidity was also notably higher in July (11-15 NTU) due to intense rainfall and reduced light penetration, which minimized photosynthetic activity. The SW monsoon also coincided with the typhoon season in the study area, resulting in partial upwelling of nutrients during strong storm events. These findings emphasize the need for continued monitoring of Sampaloc Lake's seasonal water quality patterns, as monsoon-driven changes are crucial to maintaining its ecological balance and sustainability.

Keywords: seasonal water quality dynamics, Philippine tropical lake, monsoon-driven conditions, stratification and mixing

Procedia PDF Downloads 10
1157 Therapeutic Challenges in Treatment of Adult Bacterial Meningitis Cases

Authors: Sadie Namani, Lindita Ajazaj, Arjeta Zogaj, Vera Berisha, Bahrije Halili, Luljeta Hasani, Ajete Aliu

Abstract:

Background: The outcome of bacterial meningitis is strongly related to the resistance of bacterial pathogens to the initial antimicrobial therapy. The objective of the study was to analyze the initial antimicrobial therapy, the resistance of meningeal pathogens, and the outcome of adult bacterial meningitis cases. Materials/methods: This prospective study enrolled 46 adults older than 16 years of age, treated for bacterial meningitis during 2009 and 2010 at the infectious diseases clinic in Prishtinë. Patients were categorized into age groups: >16-26 years (10 patients), >26-60 years (25 patients), and >60 years (11 patients). All p-values < 0.05 were considered statistically significant. Data were analyzed using Stata 7.1 and SPSS 13. Results: During the two-year study period, 46 patients (28 males) were treated for bacterial meningitis. 33 patients (72%) had a confirmed bacterial etiology: 13 meningococci, 11 pneumococci, 7 gram-negative bacilli (Ps. aeruginosa 2, Proteus sp. 2, Acinetobacter sp. 2, and Klebsiella sp. 1 case), and 2 staphylococci isolates. Neurological complications developed in 17 patients (37%), and the overall mortality rate was 13% (6 deaths). The neurological complications observed were: cerebral abscess (7/46; 15.2%), cerebral edema (4/46; 8.7%), hemiparesis (3/46; 6.5%), recurrent seizures (2/46; 4.3%), and single cases of cavernous sinus thrombosis, facial nerve palsy, and decerebration (1/46; 2.1% each). The most common meningeal pathogens were meningococcus in the youngest age group, gram-negative bacilli in the second age group, and pneumococcus in the elderly age group. Initial single-agent antibiotic therapy (ceftriaxone) was used in 17 patients (37%): in 60% of patients in the youngest age group and in 44% of cases in the second age group. 29 patients (63%) were treated with initial dual-agent antibiotic therapy: ceftriaxone in combination with vancomycin or ampicillin.
Ceftriaxone and ampicillin were the antibiotics most commonly used for initial empirical therapy in adults > 50 years of age. All adults > 60 years of age were treated with initial dual-agent antibiotic therapy, as this age group recorded the highest mortality rate (27%) and rate of adverse outcome (64%). Resistance of pathogens to antimicrobials was recorded in cases caused by gram-negative bacilli and was associated with a greater risk of developing neurological complications (p=0.09). None of the gram-negative bacilli were resistant to carbapenems; all were resistant to ampicillin, while 5/7 isolates were resistant to cephalosporins. No resistance of meningococci or pneumococci to beta-lactams was recorded. There were no statistically significant differences across adult age groups in the occurrence of neurological complications (p > 0.05), the resistance of meningeal pathogens to antimicrobials (p > 0.05), or the initial antimicrobial therapy (one vs. two antibiotics). Conclusions: Initial antibiotic therapy with ceftriaxone alone or in combination with vancomycin or ampicillin did not cover cases caused by gram-negative bacilli.

Keywords: adults, bacterial meningitis, outcomes, therapy

Procedia PDF Downloads 173
1156 A Case Study on an Integrated Analysis of Well Control and Blow out Accident

Authors: Yasir Memon

Abstract:

The complexity and challenges of the offshore industry are greater than in the past, and the oil and gas industry expands every day by overcoming them. More challenging wells, longer and deeper, are being drilled in today's environment, making blowout prevention centrally important to the oil and gas world. In the industry's early years, drilling operations were extremely dangerous: there was no technology for determining reservoir pressure, so drilling was a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. The potential for a blowout is a danger to both the environment and human life, causing losses through environmental damage, penalties from state and national regulators, and lost capital investment. There have been many blowouts in the oil and gas industry that damaged both people and the environment, and large capital investments are made worldwide to prevent blowouts and keep damage to a minimum. The objective of this study is to promote safety and good resources to assure safety and environmental integrity in all drilling operations. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods, and well control techniques, can reduce the probability of a blowout to a minimum. It also discusses the basic procedures, concepts, and equipment involved in well control methods and the steps used under various conditions. Furthermore, another aim of this work is to highlight the role of management in oil and gas operations.
Moreover, this study analyzes the causes of the blowout of the Macondo well, which occurred in the Gulf of Mexico on April 20, 2010; delivers recommendations and analysis of various aspects of well control methods; and lists the mistakes and compromises that British Petroleum and its partners made during drilling and well completion, along with the safety and development rule violations that led to the disaster. The case study concludes that the Macondo blowout could have been avoided with proper management of, and communication among, personnel, and that by following safety rules and laws, the environmental damage could have been kept to a minimum.

Keywords: energy, environment, oil and gas industry, Macondo well accident

Procedia PDF Downloads 186
1155 Cycle-Oriented Building Components and Constructions Made from Paper Materials

Authors: Rebecca Bach, Evgenia Kanli, Nihat Kiziltoprak, Linda Hildebrand, Ulrich Knaack, Jens Schneider

Abstract:

The building industry has a high demand for resources and at the same time is responsible for a significant amount of the waste created worldwide. Today's building components need to contribute to the protection of natural resources without creating waste. This is determined in the product development phase and affects the degree to which a product is cycle-oriented. Paper-based materials are advantageous due to their renewable origin and their ability to incorporate different functions. Besides the ecological aspects of renewable origin and recyclability, the main advantages of paper materials are their lightweight but stiff structure, optimized production processes, and good insulation values. The main deficits from a building-technology perspective are the material's vulnerability to humidity and water, as well as its flammability. On the material level, these problems can be solved by coatings or material modification; on the construction level, an intelligent setup and layering of a building component can mitigate or solve them. The target of the present work is to provide an overview of building components and construction typologies made mainly from paper materials. The research is structured in four parts: (1) functions and requirements, (2) preselection of paper-based materials, (3) development of building components, and (4) evaluation. As part of the research methodology, the needs of the building sector are first analyzed with the aim of defining the main areas of application and, consequently, the requirements. Various paper materials are tested in order to identify to what extent the requirements are satisfied and to determine potential optimizations or modifications, also in combination with other construction materials. By making use of the material's potential and resolving its deficits on the material and construction levels, building components and construction typologies are developed.
The evaluation and calculation of the structural mechanics and structural principles show that different construction typologies can be derived. Profiles such as paper tubes are best used for skeleton constructions. Massive structures, on the other hand, can be formed from plate-shaped elements such as solid board or honeycomb. For insulation purposes, corrugated cardboard or cellulose flakes have the best properties, while layered solid board can be applied to prevent internal condensation. By enhancing these properties through material combinations, for instance with mineral coatings, functional constructions made mainly of paper materials were developed. In summary, paper materials offer a huge variety of possible applications in the building sector. Through these studies, a general base of knowledge about how to build with paper was developed, which is to be reinforced by further research.

Keywords: construction typologies, cycle-oriented construction, innovative building material, paper materials, renewable resources

Procedia PDF Downloads 277
1154 Erasmus+ Program in Vocational Education: Effects of European International Mobility in Portuguese Vocational Schools

Authors: José Carlos Bronze, Carlinda Leite, Angélica Monteiro

Abstract:

The creation of the Erasmus Program in 1987 represented a milestone in promoting and funding international mobility in higher education in Europe. Its effects were so significant that they influenced the creation of the European Higher Education Area through the Bologna Process and ensured the program's continuation. Over recent decades, the escalating numbers of participants and funds instigated significant scientific study of the program's effects on higher education. In 2014, the program was renamed "Erasmus+" when it expanded into other fields of education, namely Vocational Education and Training (VET). Despite having now run in this field for a decade (2014-2024), its effects on VET remain less studied and less well known, while higher education continues to attract researchers' attention. Given this gap, it becomes relevant to study the effects of E+ on VET, particularly in the Program's priority domains: "Inclusion and Diversity," "Participation in Democratic Life, Common Values and Civic Engagement," "Environment and Fight Against Climate Change," and "Digital Transformation." The latter has recently been emphasized due to the COVID-19 pandemic, which forced so-called emergency remote teaching, leading schools to transform and adapt quickly to a new reality regardless of how prepared teachers and students were. Together with the remaining E+ priorities, it relates directly to an emancipatory perspective on education sustained by soft skills such as critical thinking, intercultural awareness, autonomy, active citizenship, teamwork, and problem-solving, among others. Based on this situation, it is relevant to know the effects of E+ on VET, namely by questioning how international mobility instigates digitalization processes and supports emancipatory aims therein.
Since VET is an education field that connects more directly to hard skills and to an instrumental approach oriented to the labor market's needs, a study was conducted to determine the effects of international mobility on the development of digital literacy and soft skills in this field. In methodological terms, the study used semi-structured interviews with teaching and non-teaching staff from three VET schools that are strongly active in the E+ Program. The interviewees were three headmasters, four mobility project managers, and eight teachers experienced in international mobility. The data were subjected to qualitative content analysis using the NVivo 14 application. The results show that E+ international mobility promotes and facilitates the use of digital technologies as a pedagogical resource at VET schools and enhances and generates students' soft skills. In conclusion, E+ mobility in the VET field supports the adoption of the program's priorities by increasing teachers' knowledge and use of digital resources and by amplifying and generating participants' soft skills.

Keywords: Erasmus international mobility, digital literacy, soft skills, vocational education and training

Procedia PDF Downloads 32
1153 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study

Authors: Jayashree Aanand Gajjam

Abstract:

The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious value, and denies the same religious merit to the incorrect use of Sanskrit words (asādhu śabda) or to vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though these may help in communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in comprehension of correct and incorrect word forms in the Sanskrit and Marathi languages in India. Based on a total of 19 experiments (both web-based and classroom-controlled) with approximately 900 Indian readers, it was found that while incorrect forms in Sanskrit are comprehended with lower accuracy than correct word forms, no such difference is seen for Marathi. This is interpreted to mean that incorrect word forms in the native language, or in a language spoken daily (such as Marathi), pose a lower cognitive load than those in a language that is not spoken daily but only read (such as Sanskrit). The theoretical basis for the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms have their own expressive power, since they convey meaning, whereas the Mīmāṁsakas (the Exegetes) and the Naiyāyikas (the Logicians) believe that corrupt forms can convey meaning only indirectly, by recalling their association and similarity with the correct forms. The grammarians regarded the vernaculars, born of speakers' inability to speak proper Sanskrit, as degenerate or fallen forms of the "divine" Sanskrit language, while speakers who could use proper Sanskrit, the standard language, were considered Śiṣṭa ("elite").
The different ideas of the different schools strictly adhere to their textual dispositions. For many years, sociolinguists have agreed that no variety of language is inherently better than any other; all are equal as long as they serve the needs of the people who use them. Although the standard form of a language may offer speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If incorrect word forms triggered recall of the correct word forms in the reader, as the theory suggests, this would add an extra step to the process of sentential cognition, leading to a higher cognitive load and lower accuracy. This has not been the case for Marathi. Although speaking and listening to the vernaculars is common practice and reading the vernacular is not, Marathi readers readily and accurately comprehended incorrect word forms in sentences, in contrast to Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in the standard form, and vernacular forms of Sanskrit are not found in conversational data.

Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit

Procedia PDF Downloads 126
1152 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations

Authors: Mustafa Malki, Ewan R. Pearson

Abstract:

Tools utilized by health care practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which can markedly alter the severity of these interactions. To the best of our knowledge, there have been few published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of the most commonly used drug combinations in the UK. The UK Biobank (UKBB) database was used to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We recognised 37 common and unique interacting genes across all of our drug combinations. Of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variant with minor allele frequency ≥ 5%, independence, and passing the HWE test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model. As the data are cross-sectional, drug intolerance was identified from the genotype distribution, as indicated by a lower percentage of patients carrying the risk allele while on the drug combination compared with those free of these risk factors, and vice versa for drug tolerance. In the GoDARTS database, the same list of common drug combinations identified in the UKBB was used, with the same list of candidate genetic variants plus 14 new SNPs, for a total of 114 variants meeting the selection criteria in GoDARTS. From the list of the top 200 drug combinations, we selected 28 in which both drugs are known to be used chronically.
For each of our 28 combinations, three drug-response phenotypes were defined (drug stop/switch, dose decrease, or dose increase of either drug during their interaction). Each of the three phenotypes for each of our 28 drug combinations was tested for association against our 114 candidate genetic variants. The results show replication of four findings between the two databases: (1) Omeprazole + Amitriptyline + rs2246709 (A > G) variant in the CYP3A4 gene (p-values and ORs for the UKBB and GoDARTS respectively = 0.048, 0.037, 0.92, and 0.52 (dose-increase phenotype)); (2) Simvastatin + Ranitidine + rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75 (drug stop/switch phenotype)); (3) Atorvastatin + Doxazosin + rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14 (drug stop/switch phenotype)); and (4) Simvastatin + Nifedipine + rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30 (drug stop/switch phenotype)). In addition, some other non-replicated but interesting significant findings were detected. Our work also provides a rich source of information for researchers interested in DD, DG, or DDG interaction studies, as it highlights the top common drug combinations in the UK and identifies 114 genetic variants related to drug pharmacokinetics.
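The variant selection step described above (common variants with MAF ≥ 5% whose genotype counts pass the HWE test) can be sketched as a simple filter. The genotype-count interface and the chi-square critical value of 3.841 (df = 1, α = 0.05) below are illustrative assumptions, not details taken from the study:

```python
def hwe_maf_filter(n_AA, n_Aa, n_aa, maf_threshold=0.05, chi2_crit=3.841):
    """Illustrative filter for a biallelic variant: keep it only if its
    minor allele frequency (MAF) meets the threshold and its genotype
    counts are consistent with Hardy-Weinberg equilibrium (HWE)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # frequency of allele A
    q = 1 - p
    if min(p, q) < maf_threshold:    # common-variant criterion
        return False
    # Expected genotype counts under HWE: p^2, 2pq, q^2
    expected = {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}
    observed = {"AA": n_AA, "Aa": n_Aa, "aa": n_aa}
    chi2 = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in expected)
    # 3.841 is the 5% critical value of chi-square with 1 degree of freedom
    return chi2 < chi2_crit

# Example: genotype counts in perfect HWE at p = 0.5 pass the filter
print(hwe_maf_filter(25, 50, 25))  # True
```

In a real pipeline this per-variant check would be applied across all candidate SNPs before the association testing, alongside the independence (linkage) criterion, which is not modeled here.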

Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics

Procedia PDF Downloads 163
1151 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products

Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie

Abstract:

Raw food waste has a high water content; if it is incinerated, it increases the cost of treatment, so composting or energy recovery is usually used instead. Technologies for composting food waste are mature, but odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, meanwhile, is mainly used in the manufacture of integrated circuit boards; it is hard to recycle and reuse directly because of its hard structure, and it is also difficult to incinerate, producing air pollutants through incomplete combustion. In this study, supercritical hydrothermal and subcritical glycolysis thermal-conversion technology is used to convert biomass wastes, bakelite and raw kitchen waste, into carbon materials and biofuels. Batch carbonization tests are performed under the high-temperature, high-pressure conditions of the solvents and under different operating conditions, including wet- and dry-base mixed biomass. The study is divided into two parts: in the first, bakelite waste serves as the dry-based industrial waste; in the second, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as the wet-based biomass. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rate. The yield, conversion, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of synergistic effects of thermal glycolysis dehydration and carbonization on the yield and recovery rate of solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); if it is combined with bioenergy with carbon capture and storage (BECCS), it can provide a new direction for 2050 net-zero carbon dioxide emissions (NZCDE).
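The evaluation of product yields and recovery mentioned above can be illustrated with a simple dry-basis mass balance. The function below is a hedged sketch using a generic definition (product mass divided by dry feed mass), not the authors' exact formulas:

```python
def product_yields(dry_feed_g, solid_g, liquid_g, gas_g):
    """Illustrative dry-basis yields: the mass fraction of the feed that
    ends up in each product phase, plus the overall mass recovery."""
    yields = {
        "solid": solid_g / dry_feed_g,
        "liquid": liquid_g / dry_feed_g,
        "gas": gas_g / dry_feed_g,
    }
    # Mass recovery: how much of the feed is accounted for by all phases
    yields["total_recovery"] = (solid_g + liquid_g + gas_g) / dry_feed_g
    return yields

# Example with made-up masses: 100 g dry feed -> 40 g char, 35 g liquid, 20 g gas
print(product_yields(100.0, 40.0, 35.0, 20.0))
```

A total recovery below 1.0 indicates unaccounted mass (e.g., moisture loss or measurement error), which is why such balances are reported alongside the individual yields.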

Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels

Procedia PDF Downloads 179
1150 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It thus becomes necessary to monitor the foliar content of this macronutrient constantly, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used, with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were irrigated with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model.
Performance was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1 scores for classes T1, T2, T3, and T4 were 76, 72, 74, and 77%, respectively. This study revealed that RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N and allowing rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of processing images with deep learning for in situ classification of the nutritional status of plants.
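The performance metrics reported above can be recovered from a confusion matrix. The sketch below implements accuracy and one-vs-rest per-class F1 in plain Python, with a made-up two-class example rather than the study's four-class results:

```python
def accuracy_and_f1(cm):
    """Accuracy and per-class one-vs-rest F1 from a square confusion
    matrix, where cm[i][j] counts class-i samples predicted as class j."""
    k = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(k)) / total
    f1_scores = []
    for c in range(k):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(k)) - tp  # predicted c, actually other
        fn = sum(cm[c]) - tp                       # actually c, predicted other
        denom = 2 * tp + fp + fn                   # F1 = 2TP / (2TP + FP + FN)
        f1_scores.append(2 * tp / denom if denom else 0.0)
    return accuracy, f1_scores

# Made-up 2-class example: 17 of 20 samples classified correctly
acc, f1 = accuracy_and_f1([[8, 2], [1, 9]])
print(acc)  # 0.85
```

For the four N-dose classes, the same function would take a 4x4 matrix and return four F1 values, matching the per-class reporting style used in the study.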

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 19
1149 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application

Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski

Abstract:

Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressing. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption capacity, which allows them to take up wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers, ensuring biocompatibility, and synthetic ones, providing adequate mechanical strength, is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to its poor strength properties, alginate-based hydrogel materials are often enhanced by the addition of another polymer such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound-healing material, with particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on a chemical cross-linking method using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as the crosslinking agent and ammonium persulfate as the initiator. In vitro degradation tests of the SA/PVA/AV hydrogels were carried out in phosphate-buffered saline (pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid. All specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (300 x 8 mm). A 0.1 M NaCl solution at a flow rate of 0.65 ml/min was used as the mobile phase.
Three injections of 50 µl were made for each sample. Thermogravimetric data for the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples with masses of about 10 mg were weighed separately into Al2O3 crucibles and heated from 30°C to 900°C at a scanning rate of 10 °C·min−1 under a nitrogen atmosphere. Based on the research conducted, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol), and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples, whether immersed in PBS or in distilled water, did not degrade over the whole incubation time.

Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)

Procedia PDF Downloads 164
1148 Structure and Dimensions Of Teacher Professional Identity

Authors: Vilma Zydziunaite, Gitana Balezentiene

Abstract:

Teaching is among the most responsible of professions, and it is not merely an artisan's job: it requires a developed ability to identify oneself with the chosen profession. Research questions: How do teachers characterize their authentic individual professional identity? What factors do teachers identify as supporting and limiting professional identity? The aim was to develop a grounded theory (GT) of teacher professional identity (TPI). The research methodology is based on Charmaz's GT version. Data were collected via semi-structured interviews with a sample of 12 teachers. Findings: 15 extracted categories revealed that the core of TPI is the teacher's professional calling. The premises of TPI are family support, motives for choosing the teaching profession, and the teacher's didactic competence. The context of TPI consists of the teacher's compliance with the profession, purposeful preparation for pedagogical studies, and professional growth. The strategy of TPI is based on strengthening the teacher's relationship with the school community. Professional frustration limits TPI. The TPI outcome includes teacher recognition and authority, as well as professional mastery, professionalism, and professional satisfaction. The dimensions of the TPI GT are the past (reaching the teaching profession), the present (the teacher's commitment to professional activity), and the future (reconsideration of the teaching profession). Conclusions: The substantive GT describes professional identity as a complex, changing, and life-long process, which develops together with the teacher's personal identity and is connected to professional activity. The professional decision "to be a teacher" is determined by the interaction of internal (professional vocation, personal characteristics, values, self-image, talents, abilities) and external (family, friends, school community, labor market, working conditions) factors.
The dimensions of TPI development include the past (the pursuit of the teaching profession), the present (the teacher's commitment to professional activity), and the future (the revision of the teaching profession). A significant connection emerged: as the teacher's professional commitment strengthens (creating a self-image, growing professional experience, recognition, professionalism, mastery, satisfaction with pedagogical activity), the dimension of rethinking the teaching profession weakens. This shows that professional identity occupies an important place in a teacher's life and affects professional success and job satisfaction. Teachers singled out the main factors supporting a teacher's professional identity: perception of their own self-image, professional vocation, positive personal qualities, internal motivation, teacher recognition, confidence in choosing the teaching profession, job satisfaction, professional knowledge, professional growth, good relations with the school community, pleasant experiences, a quality education process, and excellent student achievements.

Keywords: grounded theory, teacher professional identity, semi-structured interview, school, students, school community, family

Procedia PDF Downloads 74
1147 Political Skills in Social Management and Responsibility of Media Studies

Authors: Musa Miah

Abstract:

Society and social activities are directly governed by political sociology. Political sociology has an impact on the whole of human society: the interrelationships of people in society, social responsibilities and duties, the nature of society, the society and culture of different countries, the conduct of social activities, social change, and social development. Through it, correct knowledge and decisions are reached by analyzing the complexities of society in different ways. In a modern civilized society, people need accurate knowledge about how they live and about their behavior, customs, and principles. The need for political sociology is undeniable, even when new plans are to be adopted for the development of society. The importance of practicing political sociology is immense if any country, nation, or society is to move forward on the path of sustainable development. Research has shown that political sociology is an essential aspect of the social impact of development, of the sociological analysis of poverty or underdevelopment, and of the development of human values in individual life. Its importance for understanding society as a whole is undeniable, because to know about social problems, to identify them, and to find their causes, one needs political sociology. It also makes it possible to gain knowledge of the class structure of society and of the people of different classes and professions who live in it, and of their involvement in various societies, communities, and groups at home and abroad. Therefore, research has shown that in order to successfully solve any task of society, it is necessary to know society in full. Media studies: Media studies are directly related to socialization. Media strategy has had a positive impact on the management and direction of society. At present, media studies in Bangladesh are working toward providing opportunities for up-to-date, quality higher education.
Departments of Journalism, Communication and Media Studies have been introduced in different universities of Bangladesh, and the discipline has gained immense popularity since its inception. Top degree holders, as well as eminent editors, senior journalists, writers, and researchers, contribute their expertise there. There is now ample scope for research in newspapers, magazines, radio, television, and online media, as well as for work as an advertising or documentary filmmaker or in domestic and foreign NGOs and other corporate organizations. According to the study, media studies have had a positive impact on the media in Bangladesh, especially on television channels, the expansion and development of online media, and the creation of clear ideas about communication, journalism, and the media. Workshops, seminars, and discussions are being held on contemporary national and international issues in addition to theoretical concepts. Journalism, communication, and mass media are quite exceptional and challenging subjects compared to traditional ones in the present context. In this regard, there is a unique opportunity to build a modern society with taste and personality, beyond mere employment.

Keywords: Bangladesh, Dhaka, social activities, political sociology

Procedia PDF Downloads 150
1146 Improving Contributions to the Strengthening of the Legislation Regarding Road Infrastructure Safety Management in Romania, Case Study: Comparison Between the Initial Regulations and the Clarity of the Current Regulations - Trends Regarding the Efficiency

Authors: Corneliu-Ioan Dimitriu, Gheorghe Frățilă

Abstract:

Romania and Bulgaria have high rates of road deaths per million inhabitants. Directive (EU) 2019/1936, known as the RISM Directive, has been transposed into national law by each Member State. The research focuses on the amendments made to Romanian legislation through Government Ordinance no. 3/2022, which aims to improve road safety management on infrastructure. The aim of the research is two-fold: to sensitize the Romanian Government and decision-making entities to develop an integrated and competitive management system and to establish a safe and proactive mobility system that ensures efficient and safe roads. The research includes a critical analysis of European and Romanian legislation, as well as subsequent normative acts related to road infrastructure safety management. Public data from European Union and national authorities, as well as data from the Romanian Road Authority-ARR and Traffic Police database, are utilized. The research methodology involves comparative analysis, criterion analysis, SWOT analysis, and the use of GANTT and WBS diagrams. The Excel tool is employed to process the road accident databases of Romania and Bulgaria. Collaboration with Bulgarian specialists is established to identify common road infrastructure safety issues. The research concludes that the legislative changes have resulted in a relaxation of road safety management in Romania, leading to decreased control over certain management procedures. The amendments to primary and secondary legislation do not meet the current safety requirements for road infrastructure. The research highlights the need for legislative changes and strengthened administrative capacity to enhance road safety. Regional cooperation and the exchange of best practices are emphasized for effective road infrastructure safety management. The research contributes to the theoretical understanding of road infrastructure safety management by analyzing legislative changes and their impact on safety measures. 
It highlights the importance of an integrated and proactive approach to reducing road accidents and achieving the "zero deaths" objective set by the European Union. Data collection involves accessing public data from the relevant authorities and using information from the Romanian Road Authority (ARR) and the Traffic Police database. Analysis procedures include critical analysis of legislation, comparative analysis of transpositions, criterion analysis, and the use of diagrams and tools such as SWOT, GANTT, WBS, and Excel. The research addresses the effectiveness of legislative changes in road infrastructure safety management in Romania and their impact on control over management procedures. It also explores the need for strengthened administrative capacity and regional cooperation in addressing road safety issues. The research concludes that the legislative changes made in Romania have not strengthened road safety management, and it emphasizes the need for immediate action, legislative amendments, and enhanced administrative capacity. Collaboration with Bulgarian specialists and the exchange of best practices are recommended for effective road infrastructure safety management. The research contributes to the theoretical understanding of road safety management and provides valuable insights for policymakers and decision-makers in Romania.

Keywords: management, road infrastructure safety, legislation, amendments, collaboration

Procedia PDF Downloads 84
1145 Study on Co-Relation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy

Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi

Abstract:

Objective: To evaluate the ability of serum prostate specific antigen (PSA) concentration, at two cut-off points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the Department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Methods: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole-body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the Nuclear Medicine Department, Sindh Institute of Urology and Transplantation, Karachi, Pakistan. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole-body bone scintigraphy were excluded from this study. A whole-body bone scan (from the toes to the top of the head) was performed using a moving whole-body gamma camera technique (anterior and posterior) 2-4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients were required to have a pathology report available. Bony metastases were determined from the bone scan studies, and no further correlation with histopathology or other imaging modalities was performed. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL, and 19/42 (45.23%), respectively. According to PSA level, patients were divided into six groups: <10 ng/mL (10/42), 10-20 ng/mL (5/42), 20-50 ng/mL (2/42), 50-100 ng/mL (3/42), 100-500 ng/mL (3/42), and more than 500 ng/mL (0/42) presenting a negative bone scan. 
The incidence of positive bone scans for bone metastasis in each group was 1 patient (5.26%), 0 patients (0%), 3 patients (15.78%), 1 patient (5.26%), 4 patients (21.05%), and 10 patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a positive scintigraphic examination for the presence of bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level below 10 ng/mL, and in only 1 patient (5.26%) with bone metastasis was the PSA concentration below 20 ng/mL. Therefore, when the cut-off adopted for serum PSA concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95% with a sensitivity of 94.74%, while the positive predictive value and specificity of the method were 56.53% and 43.48%, respectively. When the cut-off for serum PSA concentration was 20 ng/mL, the observed positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity were 100% and 95%, respectively. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cut-off point than a serum PSA concentration higher than 10 ng/mL for predicting metastasis on radionuclide bone scintigraphy. In this way, unnecessary cost can be avoided, since a considerable proportion of prostate adenocarcinomas present serum PSA levels below 20 ng/mL, and for these cases radionuclide bone scintigraphy could be unnecessary.
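The predictive values reported above all follow from the standard 2x2 diagnostic table. A minimal sketch (the counts below are illustrative placeholders, not the study's raw table, which the abstract does not give):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 table (as fractions)."""
    sensitivity = tp / (tp + fn)  # detected among all with metastasis
    specificity = tn / (tn + fp)  # ruled out among all without metastasis
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a PSA cut-off classifying 42 patients.
sens, spec, ppv, npv = diagnostic_metrics(tp=18, fp=8, fn=1, tn=15)
```

Raising the cut-off from 10 to 20 ng/mL moves scan-negative patients from the false-positive to the true-negative cell, which is what drives the specificity gain reported above.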

Keywords: bone scan, cut off value, prostate specific antigen value, scintigraphy

Procedia PDF Downloads 319
1144 Carbon Pool Assessment in Community Forests, Nepal

Authors: Medani Prasad Rijal

Abstract:

A forest is both a factory and a product: it supplies tangible and intangible goods and services, including timber, fuel wood, fodder, grass, and leaf litter, as well as non-timber edible goods and medicinal and aromatic products, and additionally provides environmental services. These environmental services are of local, national, or even global importance. In Nepal, more than 19,000 community forests provide environmental services at an economic benefit below their actual potential. There is a risk that the cost of managing these forests will exceed the benefits, so that they become open-access resources in the future. Most environmental goods and services have no markets, which means there are no prices at which they are available to consumers; therefore, valuing these goods and services, establishing a payment mechanism for them, and ensuring that the benefits reach the community are relevant at both local and global scales. There are few examples of domestic carbon trading to meet country-wide emission goals. In this context, the study aims to explore public attitudes towards carbon offsetting and the public's responsibility towards service providers. This study helps promote awareness of environmental services among the general public, service providers, and community forests. The research unveils the carbon pool scenario in community forests and the willingness to pay for carbon offsetting among people who consume more energy than average and emit relatively more carbon into the atmosphere. The study assessed the carbon pool status in two community forests and valuated the carbon service from community forests through willingness to pay in Dharan municipality, situated in eastern Nepal. In the study, carbon pools in the two community forests were assessed following the 'Forest Carbon Inventory Guideline 2010' prescribed by the Ministry of Forests and Soil Conservation, Nepal. 
Final outcomes of the analysis in the intensively managed area of Hokse CF recorded a carbon density of 103.58 tons C/ha, with a total carbon stock of 6173.30 tons. Similarly, in Hariyali CF, the carbon density was recorded as 251.72 Mg C/ha (i.e., tons per hectare). The total carbon stock of the intensively managed blocks in Hariyali CF is 35839.62 tons.
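The relationship between the reported per-hectare carbon density and the total stock is a simple area scaling; a minimal sketch, where the block area is back-calculated from the abstract's own Hokse CF figures rather than taken from the study:

```python
def carbon_stock_tons(density_t_per_ha, area_ha):
    """Total carbon stock (tons) = carbon density (tons C/ha) x block area (ha)."""
    return density_t_per_ha * area_ha

# Hokse CF: 103.58 t C/ha and a 6173.30 t stock imply a block of roughly 59.6 ha.
implied_area_ha = 6173.30 / 103.58
stock = carbon_stock_tons(103.58, implied_area_ha)
```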

Keywords: carbon, offsetting, sequestration, valuation, willingness to pay

Procedia PDF Downloads 355
1143 Diagnosis of Intermittent High Vibration Peaks in Industrial Gas Turbine Using Advanced Vibrations Analysis

Authors: Abubakar Rashid, Muhammad Saad, Faheem Ahmed

Abstract:

This paper provides a comprehensive study of the diagnosis of intermittent high vibrations on an industrial gas turbine using detailed vibration analysis, followed by its rectification. Engro Polymer & Chemicals Limited, a chlor-vinyl complex located in Pakistan, has a captive combined cycle power plant with two 28 MW gas turbines (made by Hitachi) and one 15 MW steam turbine. In 2018, the organization faced an issue of high vibrations on one of the gas turbines. These high vibration peaks appeared intermittently on both the compressor's drive end (DE) and the turbine's non-drive end (NDE) bearings. The amplitude of the high vibration peaks was 150-170% of baseline values on the DE bearing and 200-300% on the NDE bearing. In one of these episodes, the gas turbine tripped on "High Vibrations Trip" logic actuated at 155 µm. Limited instrumentation is available on the machine, which is monitored with a GE Bently Nevada 3300 system with two proximity probes installed at the turbine NDE, compressor DE, and generator DE and NDE bearings. The machine's transient ramp-up and steady-state data were collected using ADRE SXP and DSPI 408. Since only one Keyphasor is installed on the turbine's high-speed shaft, a derived drive Keyphasor was configured in ADRE to obtain the low-speed shaft rpm required for data analysis. By analyzing the Bode, shaft centerline, polar, and orbit plots, rubbing was evident on the turbine's NDE, along with increased clearance of the turbine's NDE radial bearing. The subject bearing was then inspected, and heavy deposition of carbonized coke was found on the labyrinth seals of the bearing housing, with clear rubbing marks on the shaft and housing covering 20-25 degrees of the inner radius of the labyrinth seals. The collected coke sample was tested in the laboratory and found to be residue of lube oil from the bearing housing. After detailed inspection and cleaning of the shaft journal area and bearing housing, a new radial bearing was installed. 
Before assembling the bearing housing, the bearing cooling and sealing air lines were also cleaned, as inadequate flow of cooling and sealing air can accelerate coke formation in the bearing housing. The machine was then brought back online, and data was collected again using ADRE SXP and DSPI 408 for health analysis. The vibrations were within the acceptable zone per ISO 7919-3, while all other parameters were also within the vendor-defined range. As a learning from this case, a revised operating and maintenance regime has also been proposed to enhance the machine's reliability.

Keywords: ADRE, bearing, gas turbine, GE Bently Nevada, Hitachi, vibration

Procedia PDF Downloads 146
1142 The Role of Interest Groups in Foreign Policy: Assessing the Influence of the 'Pro-Jakarta Lobby' in Australia and Indonesia's Bilateral Relations

Authors: Bec Strating

Abstract:

This paper examines the ways that domestic politics and pressure, generated through lobbying, public diplomacy campaigns, and other tools of soft power, contribute to the formation of the short-term and long-term national interests, priorities, and strategies of states in their international relations. It primarily addresses the conceptual problems regarding the kinds of influence that lobby groups wield in foreign policy and how this influence might be assessed. Scholarly attention has been paid to influential foreign policy lobbies and interest groups, particularly in the area of US foreign policy. Less attention has been paid to how lobby groups might influence the foreign policy of a middle power such as Australia. This paper examines some of the methodological complexities in developing and conducting a research project that can measure the nature and influence of lobbies on foreign affairs priorities and activities. This paper will use Australian foreign policy, in the context of its historical bilateral relationship with Indonesia, as a case study for considering the broader issue of domestic influences on foreign policy. Specifically, this paper will use the so-called 'pro-Jakarta lobby' as an example of an interest group. The term 'pro-Jakarta lobby' is used in media commentary and scholarship to describe an amorphous collection of individuals who have sought to influence Australian foreign policy in favour of Indonesia. The term was originally applied to a group of Indonesia experts at the Australian National University in the 1980s but expanded to include journalists, think tanks, and key diplomats. The concept of the 'pro-Jakarta lobby' was developed largely through criticisms of Australia's support for Indonesia's sovereignty over East Timor and West Papua. Pro-independence supporters were integral to creating the 'lobby' through their rhetoric and criticisms of its influence on Australian foreign policy. 
In these critical narratives, the 'pro-Jakarta lobby' supported a realist approach to relations with Indonesia during the years of President Suharto's regime, which prioritized appeasement of Indonesia over values of democracy and human rights. The lobby was viewed as integral in embedding a form of 'foreign policy exceptionalism' towards Indonesia in Australian policy-making circles. However, little critical and scholarly attention has been paid to the nature, aims, strategies, and activities of the 'pro-Jakarta lobby'. This paper engages with methodological issues of foreign policy analysis: what was the 'pro-Jakarta lobby'? Why was it considered more successful than other activist groups in shaping policy? And how can its influence on Australia's approach to Indonesia be tested in relation to other contingent factors shaping policy? In addressing these questions, this case study will assist in addressing a broader scholarly concern about the capacities of collectives or individuals to shape and direct the foreign policies of states.

Keywords: foreign policy, interests groups, Australia, Indonesia

Procedia PDF Downloads 343
1141 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index

Authors: Alejandro Cittadino, David Allende

Abstract:

Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with control and monitoring systems. Due to the specific composition of municipal solid waste in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. In order to investigate the surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the dumpsite are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse through the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and it has played an increasingly important role in water resource management, since it provides a number simple enough for the public to understand that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and as percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream on the Reconquista river, with a rating scale between 0 (very poor water quality) and 10 (excellent water quality). 
The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index's scale. The annual averaged ICA index scores (computed quarterly) were 4.9, 3.9, 4.4, and 5.0 upstream and 3.9, 5.0, 5.1, and 5.0 downstream during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts, and heavy metals are absent from the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the affected stream reaches from the landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate in high concentrations.
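The abstract does not give the ICA formula itself, so the sketch below uses a generic weighted-mean aggregation of 0-10 parameter sub-indices purely for illustration; the parameter names match the six listed above, but the sub-index values and weights are invented:

```python
def water_quality_index(subindices, weights):
    """Aggregate per-parameter sub-indices (each on a 0-10 scale) into a
    single 0-10 score as a weighted arithmetic mean."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(subindices, weights))

# Hypothetical sub-indices: DO (mg/l), DO (% saturation), temperature,
# BOD5, ammonia-N, chloride -- each already rescaled to 0-10.
scores = [5.0, 4.5, 7.0, 3.5, 4.0, 6.0]
weights = [0.20, 0.20, 0.10, 0.20, 0.15, 0.15]
ica = water_quality_index(scores, weights)
```

A single aggregated score like this is what lets the annual upstream/downstream averages (e.g. 4.9 vs. 3.9) be compared directly on one scale.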

Keywords: landfill, leachate, surface water, water quality index

Procedia PDF Downloads 151
1140 Experimental and Theoretical Methods to Increase Core Damping for a Sandwich Cantilever Beam

Authors: Iyd Eqqab Maree, Moouyad Ibrahim Abbood

Abstract:

The purpose of this study is to predict the damping effect for a steel cantilever beam using two methods of passive viscoelastic constrained-layer damping. The first method is a MATLAB program based on the Ross, Kerwin and Ungar (RKU) model for passive viscoelastic damping. The second is an experimental (frequency-domain) method, in which the half-power bandwidth method is used to determine the system loss factors for the damped steel cantilever beam. The RKU method has been applied to a cantilever beam because the beam is a major part of a structure, and this prediction may further be utilized for different kinds of structural applications according to design requirements in many industries. In this damping method, a simple cantilever beam is made into a sandwich structure, usually by using a viscoelastic material as a core, to ensure the damping effect. The use of viscoelastic layers constrained between elastic layers is known to be effective for damping flexural vibrations of structures over a wide range of frequencies. The energy dissipated in these arrangements is due to shear deformation in the viscoelastic layers, which occurs due to flexural vibration of the structures. The theory of dynamic stability of elastic systems deals with the study of vibrations induced by pulsating loads that are parametric with respect to certain forms of deformation. There is very good agreement of the experimental results with the theoretical findings. The main aim of this work is to find the transition region for the damped steel cantilever beam (4 mm and 8 mm thickness) from the experimental lab and from theoretical prediction (MATLAB R2011a). 
It was proved experimentally and theoretically that the transition region for the two specimens occurs at a modal frequency between mode 1 and mode 2, which gives the best damping, maximum loss factor, and maximum damping ratio; thus this type of viscoelastic core material (3M 468) is very appropriate for use in the automotive industry and in any mechanical application whose modal frequency falls between mode 1 and mode 2.
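The half-power bandwidth method used in the experimental part can be sketched as follows; the frequency readings are hypothetical, not the measured values for the 4 mm or 8 mm specimens:

```python
def half_power_damping(f_n, f1, f2):
    """Half-power (-3 dB) bandwidth method: f1 and f2 are the frequencies
    where the response falls to 1/sqrt(2) of the resonance peak at f_n.
    Loss factor eta = (f2 - f1) / f_n; for light damping, zeta = eta / 2."""
    eta = (f2 - f1) / f_n
    zeta = eta / 2.0  # equivalent viscous damping ratio
    return eta, zeta

# Hypothetical FRF readings around one mode of the sandwich beam
eta, zeta = half_power_damping(f_n=120.0, f1=114.0, f2=126.0)
```

A wider half-power bandwidth at the same resonance frequency means a higher loss factor, which is the quantity the study maximizes in the transition region between mode 1 and mode 2.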

Keywords: 3M-468 material core, loss factor, frequency domain method, MATLAB

Procedia PDF Downloads 271
1139 Quantitative Analysis of Three Sustainability Pillars for Water Tradeoff Projects in Amazon

Authors: Taha Anjamrooz, Sareh Rajabi, Hasan Mahmmud, Ghassan Abulebdeh

Abstract:

Water availability, as well as water demand, is not uniformly distributed in time and space. Numerous extra-large water diversion projects have been launched in the Amazon to alleviate water scarcity. This research utilizes statistical analysis to examine the temporal and spatial features of 40 extra-large water diversion projects in the Amazon. Using a network analysis method, the correlation between seven major basins is measured, while an impact analysis method is employed to explore the associated economic, environmental, and social impacts. The study finds that the development of water diversion in the Amazon has passed through four stages, from a preliminary or initial period to a phase of rapid development. The length of water diversion channels and the quantity of water transferred have increased significantly in the past five decades. As of 2015, more than 75 billion m³ of water was transferred through roughly 12,000 km of channels. These projects extend over half of the Amazon area. River Basin E is currently the most significant source of transferred water. Through inter-basin water diversions, the Amazon gains the opportunity to enhance Gross Domestic Product (GDP) by 5%. Nevertheless, construction costs exceed 70 billion US dollars, higher than in any other country. The average unit cost of transferred water has increased with time and scale but decreases from the western to the eastern Amazon. Additionally, annual total energy consumption for pumping exceeded 40 billion kilowatt-hours, while the associated greenhouse gas emissions are assessed at 35 million tons. Notably, ecological problems initiated by water diversion affect River Basin B and River Basin D. Due to water diversion, more than 350,000 individuals have been relocated from their homes. 
In order to enhance the sustainability of water diversion, four categories of measures are proposed for decision-makers: developing strategies for water tradeoff projects, improving integrated water resource management, creating water-saving incentives and pricing approaches, and applying ex-post assessment.

Keywords: sustainability, water trade-off projects, environment, Amazon

Procedia PDF Downloads 129
1138 Unveiling the Potential of MoSe₂ for Toxic Gas Sensing: Insights from Density Functional Theory and Non-equilibrium Green’s Function Calculations

Authors: Si-Jie Ji, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

With the rapid development of industrialization and urbanization, air pollution poses significant global environmental challenges, contributing to acid rain, global warming, and adverse health effects. It is therefore necessary to monitor the concentration of toxic gases in the atmosphere in real time and to deploy cost-effective gas sensors capable of detecting their emissions. In this study, we systematically investigated the sensing capabilities of two-dimensional MoSe₂ for seven key environmental gases (NO, NO₂, CO, CO₂, SO₂, SO₃, and O₂) using density functional theory (DFT) and non-equilibrium Green's function (NEGF) calculations. We also investigated the impact of H₂O as an interfering gas. Our results indicate that the MoSe₂ monolayer is thermodynamically stable and exhibits strong gas-sensing capabilities. The calculated adsorption energies indicate that these gases can stably adsorb on MoSe₂, with SO₃ exhibiting the strongest adsorption energy (-0.63 eV). Electronic structure analysis, including projected density of states (PDOS) and Bader charge analysis, demonstrates significant changes in the electronic properties of MoSe₂ upon gas adsorption, affecting its conductivity and sensing performance. We find that oxygen (O₂) adsorption notably influences the deformation of MoSe₂. To comprehensively understand the potential of MoSe₂ as a gas sensor, we used the NEGF method to assess the electronic transport properties of MoSe₂ under gas adsorption, evaluating current-voltage (I-V) and resistance-voltage (R-V) characteristics and transmission spectra to determine sensitivity, selectivity, and recovery time relative to pristine MoSe₂. Sensitivity, selectivity, and recovery time are analyzed at a bias voltage of 1.7 V, showing the excellent performance of MoSe₂ in detecting SO₃ among other gases. 
The pronounced changes in electronic transport behavior induced by SO₃ adsorption confirm MoSe₂’s strong potential as a high-performance gas-sensing material. Overall, this theoretical study provides new insights into the development of high-performance gas sensors, demonstrating the potential of MoSe₂ as a gas-sensing material, particularly for gases like SO₃.
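One common way to quantify sensitivity from the I-V characteristics in NEGF transport studies is the relative current change on adsorption; a minimal sketch with invented current values (the study's actual currents are not given in the abstract):

```python
def sensor_sensitivity_percent(i_pristine, i_gas):
    """Sensitivity as the relative change in device current when a gas
    adsorbs on the monolayer: S = |I_gas - I_pristine| / I_pristine * 100."""
    return abs(i_gas - i_pristine) / i_pristine * 100.0

# Hypothetical currents (in microamps) at the 1.7 V bias mentioned above
s_so3 = sensor_sensitivity_percent(i_pristine=2.0, i_gas=0.5)
```

Under this definition, a gas that perturbs the transmission spectrum more strongly yields a larger S, consistent with SO₃ (the most strongly bound gas here) being the most detectable.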

Keywords: density functional theory, gas sensing, MoSe₂, non-equilibrium Green’s function, SO₃

Procedia PDF Downloads 21
1137 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values from 100 to 500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV. 
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise is reduced more than SUV at the highest β-value. In comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, by supporting more iterations, achieves greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
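Two of the figures of merit used above reduce to simple ratios; a minimal sketch with placeholder measurements (not the study's data):

```python
def recovery_coefficient(measured, true_value):
    """NEMA recovery coefficient: measured sphere activity concentration
    divided by the true concentration (1.0 = full recovery)."""
    return measured / true_value

def snr(mean_signal, sd_background):
    """Signal-to-noise ratio used to compare reconstructions."""
    return mean_signal / sd_background

# Hypothetical values: small spheres lose more signal to partial-volume
# effects, and a higher beta lowers background noise, raising SNR.
rc_10mm = recovery_coefficient(0.45, 1.0)
rc_37mm = recovery_coefficient(0.95, 1.0)
snr_beta400 = snr(mean_signal=10.0, sd_background=2.0)
```

This is why the relative differences between β-values are largest for the 10 mm sphere: its recovery coefficient sits furthest from 1.0 and is most affected by the noise penalty.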

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 96