Search results for: systems quality
2242 Socioeconomic Disparities in the Prevalence of Obesity in Adults with Diabetes in Israel
Authors: Yael Wolff Sagy, Yiska Loewenberg Weisband, Vered Kaufman Shriqui, Michal Krieger, Arie Ben Yehuda, Ronit Calderon Margalit
Abstract:
Background: Obesity is both a risk factor for and a common comorbidity of diabetes. Obesity impedes the achievement of glycemic control and enhances the damage caused by hyperglycemia to blood vessels; thus, it increases diabetes-related complications. This study assessed the prevalence of obesity and morbid obesity among Israeli adults with diabetes, and estimated disparities associated with sex and socioeconomic position (SEP). Methods: A cross-sectional study was conducted in the setting of the Israeli National Program for Quality Indicators in Community Healthcare. Data on the entire Israeli population are retrieved from the electronic medical records of the four health maintenance organizations (HMOs). The study population included all Israeli patients with diabetes aged 20-64 with a documented body mass index (BMI) in 2016 (N=180,451). Diabetes was defined as the existence of one or more of the following criteria: (a) plasma glucose level >200 mg% in at least two tests conducted at least one month apart in the previous year; (b) HbA1c>6.5% at least once in the previous year; (c) at least three prescriptions of diabetes medications dispensed during the previous year. Two measures were included: the prevalence of obesity (defined as last BMI≥30 kg/m2 and <35 kg/m2) and the prevalence of morbid obesity (defined as last BMI≥35 kg/m2) in individuals aged 20-64 with diabetes. The cut-off value for morbid obesity was set in accordance with the eligibility criteria for bariatric surgery in diabetics. Data were collected by the HMOs and aggregated by age, sex, and SEP. SEP was based on the ranking of statistical areas by the Israeli Central Bureau of Statistics and was divided into 4 categories, from 1 (lowest) to 4 (highest). Results: BMI documentation among adults with diabetes was 84.9% in 2016. The prevalence of obesity in the study population was 30.5%.
Although the overall rate was similar in both sexes (30.8% in females, 30.3% in males), SEP disparities were stronger in females (32.7% in SEP level 1 vs. 27.7% in SEP level 4; 18.1% relative difference) than in males (30.6% in SEP level 1 vs. 29.3% in SEP level 4; 4.4% relative difference). The overall prevalence of morbid obesity in this population was 20.8% in 2016. The rate among females was almost double that among males (28.1% and 14.6%, respectively). In both sexes, the prevalence of morbid obesity was strongly associated with lower SEP. However, in females, disparities between SEP levels were much stronger (34.3% in SEP level 1 vs. 18.7% in SEP level 4; 83.4% relative difference) than SEP disparities in males (15.7% in SEP level 1 vs. 12.3% in SEP level 4; 27.6% relative difference). Conclusions: The overall prevalence of BMI≥30 kg/m2 among adults with diabetes in Israel exceeds 50%, and the prevalence of morbid obesity suggests that 20% meet the BMI criteria for bariatric surgery. Prevalence rates show major SEP and sex disparities, with especially strong SEP disparities in morbid obesity among females. These findings highlight the need for greater consideration of different population groups when implementing interventions.
Keywords: diabetes, health disparities, health policy, obesity, socio-economic position
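The relative differences quoted in the abstract are straightforward to reproduce; a minimal sketch (the function name is ours, the figures are the abstract's):

```python
def relative_difference(low_sep_rate, high_sep_rate):
    """Relative difference between prevalence in the lowest and highest
    SEP categories, expressed as a percentage of the highest-SEP rate."""
    return (low_sep_rate - high_sep_rate) / high_sep_rate * 100

# Obesity prevalence figures quoted in the abstract (percent):
females = round(relative_difference(32.7, 27.7), 1)  # 18.1
males = round(relative_difference(30.6, 29.3), 1)    # 4.4
print(females, males)
```

The same formula reproduces the morbid-obesity figures (83.4% for females, 27.6% for males).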
Procedia PDF Downloads 212
2241 Disability Management and Occupational Health Enhancement Program in Hong Kong Hospital Settings
Authors: K. C. M. Wong, C. P. Y. Cheng, K. Y. Chan, G. S. C. Fung, T. F. O. Lau, K. F. C. Leung, J. P. C. Fok
Abstract:
Hospital Authority (HA) is the statutory body that manages all public hospitals in Hong Kong. Occupational Care Medicine Service (OMCS) is an in-house multi-disciplinary team responsible for injury management in HA. Hospital administrative services (AS) provide essential support in daily hospital operation to facilitate the provision of quality healthcare services. An occupational health enhancement program in the Tai Po Hospital (TPH) domestic service supporting unit (DSSU) was piloted in 2013 with a satisfactory outcome; the keys to success were staff engagement and management support. Riding on this success, the program was rolled out to another 5 AS departments of Alice Ho Miu Ling Nethersole Hospital (AHNH) and TPH in 2015. This paper highlights the indispensable components of a disability management and occupational health enhancement program in hospital settings. Objectives: 1) Facilitate the workplace to support staff with health problems affecting work; 2) Enhance staff's occupational health. Methodology: The Hospital Occupational Safety and Health (OSH) team and AS departments (catering, linen services, and DSSU) of AHNH and TPH worked closely with OMCS. Focus group meetings and worksite visits were conducted with frontline staff engagement. OSH hazards were identified and corresponding OSH improvement measures introduced, e.g., the invention of a high dusting device to minimize working at height and a tailor-made linen cart to minimize back bending at work. Specific MHO training was offered to each AS department. A disability management workshop was provided to supervisors in order to enhance their knowledge and skills in return-to-work (RTW) facilitation. Based on the injured staff member's health condition, OMCS would provide work recommendations, and an RTW plan was formulated with the engagement of staff and their supervisors. Genuine communication among stakeholders with expectation management paved the way for realistic goal setting and the success of our program.
Outcome: After implementation of the program, a significant 26% drop in musculoskeletal disorder-related sickness absence days was noted in 2016 compared to the average of 2013-2015. The improvement was attributed to innovative OSH improvement measures, teamwork, staff engagement, and management support. Staff and supervisor feedback was very encouraging: 90% of respondents rated the program as very satisfactory in the evaluation. This program exemplified good work sharing among departments to support staff in need.
Keywords: disability management, occupational health, return to work, occupational medicine
Procedia PDF Downloads 210
2240 Extraction of Phycocyanin from Spirulina platensis by Isoelectric Point Precipitation and Salting Out for Scale Up Processes
Authors: Velasco-Rendón María Del Carmen, Cuéllar-Bermúdez Sara Paulina, Parra-Saldívar Roberto
Abstract:
Phycocyanin is a blue pigment protein with fluorescent activity produced by cyanobacteria. It has recently been studied to determine its anticancer, antioxidant, and anti-inflammatory potential. In 2014 it was approved as a Generally Recognized As Safe (GRAS) protein pigment for the food industry. Therefore, phycocyanin shows potential for the food, nutraceutical, pharmaceutical, and diagnostics industries. Conventional phycocyanin extraction uses buffer solutions and ammonium sulphate followed by chromatography or aqueous two-phase systems (ATPS) for protein separation. The required purification steps are therefore time-consuming, energy intensive, and not suitable for scale-up processing. This work presents an alternative to conventional methods that also allows large-scale application with commercially available equipment. The extraction was performed by exposing the dry biomass to mechanical cavitation and salting out with NaCl, chosen as an edible reagent. Isoelectric point precipitation was also used, by addition of HCl and neutralization with NaOH. The results were measured and compared in terms of phycocyanin concentration, purity, and extraction yield. Results showed that the best extraction condition was salting out with 0.20 M NaCl after 30 minutes of cavitation, with a concentration in the supernatant of 2.22 mg/ml, a purity of 3.28, and a recovery from crude extract of 81.27%. Mechanical cavitation presumably increased the solvent-biomass contact, making the crude extract visibly dark blue after centrifugation. Compared to other systems, our process has fewer purification steps, similar concentrations in the phycocyanin-rich fraction, and higher purity. The contaminants present in our process are edible NaCl or low pH, which can be neutralized. The process can also be adapted to a semi-continuous process with commercially available equipment.
These characteristics make this process an appealing alternative for phycocyanin extraction as a pigment for the food industry.
Keywords: extraction, phycocyanin, precipitation, scale-up
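The abstract does not state how concentration and purity were quantified; a common spectrophotometric approach (an assumption here, not the authors' stated method) uses the Bennett and Bogorad (1973) equation for C-phycocyanin concentration and the pigment-to-protein absorbance ratio for purity:

```python
def phycocyanin_concentration(a615, a652):
    """C-phycocyanin concentration (mg/ml) from crude-extract absorbances,
    using the widely cited Bennett and Bogorad (1973) equation."""
    return (a615 - 0.474 * a652) / 5.34

def purity_index(a620, a280):
    """Spectrophotometric purity: pigment absorbance over total protein
    absorbance; values above about 0.7 are usually considered food grade."""
    return a620 / a280
```

A purity of 3.28, as reported, would then correspond to an A620 absorbance 3.28 times the A280 absorbance of the same extract.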
Procedia PDF Downloads 438
2239 Ultrasonic Agglomeration of Protein Matrices and Its Effect on Thermophysical, Macro- and Microstructural Properties
Authors: Daniela Rivera-Tobar, Mario Perez-Won, Roberto Lemus-Mondaca, Gipsy Tabilo-Munizaga
Abstract:
Different dietary trends worldwide seek foods with anti-inflammatory properties, rich in antioxidants, proteins, and unsaturated fatty acids, that lead to better metabolic, intestinal, mental, and cardiac health. In this sense, food matrices with high protein content based on macro- and microalgae are an excellent alternative to meet the new needs of consumers. An emerging and environmentally friendly technology for producing protein matrices is ultrasonic agglomeration. It consists of the formation of permanent bonds between particles, improving the agglomeration of the matrix compared to conventionally agglomerated (compressed) products. Among the advantages of this process are the reduction of nutrient loss and the avoidance of binding agents. The objective of this research was to optimize the ultrasonic agglomeration process in matrices composed of Spirulina (Arthrospira platensis) powder and cochayuyo (Durvillaea antarctica) flour, with Young's modulus as the response variable and the process conditions as independent variables (ultrasonic amplitude: 70, 80, and 90%; agglomeration time: 20, 25, and 30 seconds; cycles: 3, 4, and 5). The process was evaluated using a central composite design and analyzed using response surface methodology. In addition, the effects of agglomeration on thermophysical and microstructural properties were evaluated. Ultrasonic compression at 80 and 90% amplitude caused conformational changes according to Fourier-transform infrared spectroscopy (FTIR) analysis; the best condition with respect to the observed microstructure images (SEM) and differential scanning calorimetry (DSC) analysis was 90% amplitude at 25 and 30 seconds with 3 and 4 cycles of ultrasound.
In conclusion, the agglomerated matrices present good macro- and microstructural properties, which would allow the design of food systems with better nutritional and functional properties.
Keywords: ultrasonic agglomeration, physical properties of food, protein matrices, macro and microalgae
Procedia PDF Downloads 59
2238 Association of Copy Number Variation of the CHKB, KLF6, GPC1, and CHRM3 Genes with Growth Traits of Datong Yak (Bos grunniens)
Authors: Habtamu Abera Goshu, Ping Yan
Abstract:
Copy number variation (CNV) is a significant marker of the genetic and phenotypic diversity among individuals that accounts for complex quantitative traits of phenotype and diseases via modulating gene dosage, position effects, alteration of downstream pathways, modification of chromosome structure and position within the nucleus, and disruption of coding regions in the genome. Associating copy number variations (CNVs) with growth and gene expression is a powerful approach for identifying genomic characteristics that contribute to phenotypic and genotypic variation. A previous study using next-generation sequencing illustrated that the choline kinase beta (CHKB), Krüppel-like factor 6 (KLF6), glypican 1 (GPC1), and cholinergic receptor muscarinic 3 (CHRM3) genes reside within copy number variable regions (CNVRs) of yak populations that overlap with quantitative trait loci (QTLs) of meat quality and growth. As a result, this research aimed to determine the association of CNVs of the KLF6, CHKB, GPC1, and CHRM3 genes with growth traits in the Datong yak breed. The association between the CNV types of the KLF6, CHKB, GPC1, and CHRM3 genes and the growth traits in the Datong yak breed was determined by one-way analysis of variance (ANOVA) using SPSS software. The CNV types were classified as loss (a copy number of 0 or 1), gain (a copy number >2), and normal (a copy number of 2) relative to the reference gene, BTF3, in 387 individuals of Datong yak. The results indicated that the normal CNV types of the CHKB and GPC1 genes were significantly (P<0.05) associated with high body length, height, weight, and chest girth in six-month-old and five-year-old Datong yaks. On the other hand, the loss CNV type of the KLF6 gene was significantly (P<0.05) associated with body weight, body length, and chest girth in six-month-old and five-year-old Datong yaks.
In contrast, the gain CNV type of the CHRM3 gene was significantly (P<0.05) associated with body weight, length, height, and chest girth in six-month-old and five-year-old yaks. This work provides the first observation of the biological role of CNVs of the CHKB, KLF6, GPC1, and CHRM3 genes in the Datong yak breed and might therefore provide a novel opportunity to utilize data on CNVs in designing molecular markers for selection in animal breeding programs for larger populations of various yak breeds. Overall, this study provides inclusive information on the application of CNVs of the CHKB, KLF6, GPC1, and CHRM3 genes to growth traits in Datong yaks and their possible function in bovine species.
Keywords: copy number variation, growth traits, yak, genes
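The loss/normal/gain classification described in the abstract can be sketched as a simple rule relative to the diploid reference (the copy numbers below are hypothetical, for illustration only):

```python
def cnv_type(copy_number):
    """Classify a locus copy number relative to the diploid reference
    (BTF3, copy number 2), following the loss/normal/gain scheme in the
    abstract: 0 or 1 = loss, 2 = normal, >2 = gain."""
    if copy_number <= 1:
        return "loss"
    if copy_number == 2:
        return "normal"
    return "gain"

counts = {"loss": 0, "normal": 0, "gain": 0}
for cn in [0, 1, 2, 2, 3, 5]:       # hypothetical per-animal copy numbers
    counts[cnv_type(cn)] += 1
print(counts)  # {'loss': 2, 'normal': 2, 'gain': 2}
```

The resulting groups are what the study compares against growth traits by one-way ANOVA.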
Procedia PDF Downloads 170
2237 Increase of the Nanofiber Degradation Rate Using PCL-PEO and PCL-PVP as a Shell in the Electrospun Core-Shell Nanofibers Using the Needleless Blades
Authors: Matej Buzgo, Erico Himawan, Ksenija Jašina, Aiva Simaite
Abstract:
Electrospinning is a versatile and efficient technology for producing nanofibers for biomedical applications. One of the most common polymers used for the preparation of nanofibers for regenerative medicine and drug delivery applications is polycaprolactone (PCL). PCL is a biocompatible and bioabsorbable material that can be used to stimulate the regeneration of various tissues. It is also a common material for the development of drug delivery systems, made by blending the polymer with small active molecules. However, for many drug delivery applications, e.g., cancer immunotherapy, a PCL biodegradation rate that may exceed 9 months is too long, and faster nanofiber dissolution is needed. In this paper, we investigate the dissolution and small-molecule release rates of PCL blends with two hydrophilic polymers: polyethylene oxide (PEO) or polyvinylpyrrolidone (PVP). We show that adding a hydrophilic polymer to the PCL reduces the water contact angle, increases the dissolution rate, and strengthens the interactions between the hydrophilic drug and the polymer matrix that further sustain its release. Finally, using this method, we were also able to increase the nanofiber degradation rate when PCL-PEO and PCL-PVP were used as the shell in electrospun core-shell nanofibers and speed up the release of active proteins from their core. Electrospinning can be used for the preparation of core-shell nanofibers, where active ingredients are encapsulated in the core and their release rate is regulated by the shell. However, such fibers are usually prepared by coaxial electrospinning, which is an extremely low-throughput technique. An alternative is emulsion electrospinning, which can be upscaled using needleless blades. In this work, we investigate the possibility of using emulsion electrospinning for encapsulation and sustained release of growth factors for the development of organotypic skin models.
The core-shell nanofibers were prepared using the optimized formulation, and the release rate of proteins from the fibers was investigated for 2 weeks under typical cell culture conditions.
Keywords: electrospinning, polycaprolactone (PCL), polyethylene oxide (PEO), polyvinylpyrrolidone (PVP)
Procedia PDF Downloads 272
2236 A Strategy for Reducing Dynamic Disorder in Small Molecule Organic Semiconductors by Suppressing Large Amplitude Thermal Motions
Authors: Steffen Illig, Alexander S. Eggeman, Alessandro Troisi, Stephen G. Yeates, John E. Anthony, Henning Sirringhaus
Abstract:
Large-amplitude intermolecular vibrations in combination with complex-shaped transfer integrals generate a thermally fluctuating energetic landscape. The resulting dynamic disorder and its intrinsic presence in organic semiconductors is one of the most fundamental differences from their inorganic counterparts. Dynamic disorder is believed to govern many of the unique electrical and optical properties of organic systems. However, the low-energy nature of these vibrations makes it difficult to access them experimentally, and because of this we still lack clear molecular design rules to control and reduce dynamic disorder. Applying a novel technique based on electron diffraction, we encountered strong intermolecular thermal vibrations in every single organic material we studied (14 to date), indicating that a large degree of dynamic disorder is a universal phenomenon in organic crystals. In this paper, a new molecular design strategy is presented to avoid dynamic disorder. We found that small molecules that have their side chains attached to the long axis of their conjugated core are less likely to suffer from dynamic disorder effects. In particular, we demonstrate that 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-didecyl-dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (C10-DNTT) exhibit strongly reduced thermal vibrations in comparison to other molecules, and we relate their outstanding performance to their lower dynamic disorder. We rationalize the low degree of dynamic disorder in C8-BTBT and C10-DNTT by a better encapsulation of the conjugated cores in the crystal structure, which helps reduce large-amplitude thermal motions. The work presented in this paper provides a general strategy for the design of new classes of very high mobility organic semiconductors with low dynamic disorder.
Keywords: charge transport, C8-BTBT, C10-DNTT, dynamic disorder, organic semiconductors, thermal vibrations
Procedia PDF Downloads 398
2235 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of the proposed scheme is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
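For readers unfamiliar with fingerprinting, the online matching stage that the constructed radio map serves can be sketched as a nearest-neighbour search over stored received-signal-strength (RSS) vectors. This is a generic illustration with hypothetical data, not the authors' GAN-based pipeline:

```python
import math

def knn_localize(radio_map, observed_rss, k=3):
    """Estimate a device position by averaging the k reference points whose
    stored RSS vectors are closest (Euclidean distance) to the observed one.
    radio_map: list of ((x, y), [rss, ...]) reference fingerprints."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest = sorted(radio_map, key=lambda rp: dist(rp[1], observed_rss))[:k]
    x = sum(pos[0] for pos, _ in nearest) / k
    y = sum(pos[1] for pos, _ in nearest) / k
    return x, y
```

The role of the S-DCGAN in the paper is to synthesize additional fingerprints for such a map, reducing the site-survey effort needed to populate it.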
Procedia PDF Downloads 70
2234 Development and Testing of Health Literacy Scales for Chinese Primary and Secondary School Students
Authors: Jiayue Guo, Lili You
Abstract:
Background: Child and adolescent health is crucial for both personal well-being and the nation's future health landscape. Health literacy (HL) is important in enabling adolescents to self-manage their health, a fundamental step towards health empowerment. However, there are limited tools for assessing HL among elementary and junior high school students. This study aims to construct and validate a test-based HL scale for Chinese students, offering a scientific reference for cross-cultural HL tool development. Methods: We conducted a cross-sectional online survey. Participants were recruited using a stratified cluster random sampling method, yielding a total of 4,189 Chinese in-school primary and secondary students. The development of the scale was completed by defining the concept of HL, establishing the item indicator system, screening items (7 health content dimensions), and evaluating reliability and validity. The Delphi method of expert consultation was used to screen items, the Rasch model was used for quality analysis, and Cronbach's alpha coefficient was used to examine internal consistency. Results: We developed four versions of the HL scale, each with a total score of 100, encompassing seven key health areas: hygiene, nutrition, physical activity, mental health, disease prevention, safety awareness, and digital health literacy. Each version measures four dimensions of health competencies: knowledge, skills, motivation, and behavior. After the second round of expert consultation, the average importance score of each item was 4.5-5.0, and the coefficient of variation was 0.000-0.174. The knowledge and skills dimensions are judgment-based and multiple-choice questions, with the Rasch model confirming unidimensionality at a 5.7% residual variance.
The behavioral and motivational dimensions, measured with scale-type items, demonstrated internal consistency via Cronbach's alpha and strong inter-item correlation, with KMO values of 0.924 and 0.787, respectively. Bartlett's test of sphericity, with p-values <0.001, further substantiates the scale's reliability. Conclusions: The new test-based scale, designed to evaluate competencies within a multifaceted framework, aligns with current international adolescent literacy theories and China's health education policies, focusing not only on knowledge acquisition but also on the application of health-related thinking and behaviors. The scale can be used as a comprehensive tool for HL evaluation and as a reference for other countries.
Keywords: adolescent health, Chinese, health literacy, Rasch model, scale development
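The Cronbach's alpha statistic used to check internal consistency has a short closed form; a minimal sketch (using population variance, with made-up scores for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of scores per scale item, respondents in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[r] for item in items) for r in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly parallel items yield alpha = 1; values above roughly 0.7 are conventionally taken as acceptable for a scale.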
Procedia PDF Downloads 27
2233 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging human environmental health concern in dense urban areas due to anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by +1°C since the year 2000. Extreme heat events were recorded in the month of July in the last three consecutive years, 2018, 2019, and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot period. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data we considered for developing the HVI model were (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) socially vulnerable groups; (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resulting map shows that high HVI exists in three typical areas: (1) where the population density is quite high and the vegetation cover is small; (2) artificial surfaces (built-up areas); (3) industrial zones that release thermal energy and ground-level ozone, while areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram after data analysis, where anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city.
Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
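The aggregation step behind such an index can be illustrated with a deliberately simplified equal-weight composite: each indicator is standardized across districts and the z-scores are summed. This is a stand-in for the PCA-weighted aggregation the study actually uses (PCA would derive the weights from the indicators' correlation structure); indicator names and values below are hypothetical:

```python
def z_scores(values):
    """Standardize one indicator across districts (population std. dev.)."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / sd for v in values]

def heat_vulnerability_index(indicators):
    """Equal-weight composite HVI: z-score each indicator (oriented so that
    higher = more vulnerable), then sum per district."""
    cols = [z_scores(vals) for vals in indicators.values()]
    return [sum(col[i] for col in cols) for i in range(len(cols[0]))]
```

Districts with dense population, low vegetation, and high land surface temperature end up with the highest composite scores, matching the hot zones described above.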
Procedia PDF Downloads 147
2232 Detection of Flood Prone Areas Using Multi-Criteria Evaluation, Geographical Information Systems and Fuzzy Logic: The Ardas Basin Case
Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios
Abstract:
The severity of extreme phenomena lies in their ability to cause severe damage in a small amount of time. It has been observed that floods affect the greatest number of people and induce the biggest damage when compared to all other annual natural disasters. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to the European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic, and multi-criteria evaluation methods so that the most vulnerable areas are defined. Therefore, ten factors related to the geophysical, morphological, climatological/meteorological, and hydrological characteristics of the basin were selected. Afterwards, two models were created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model made use of the combination of fuzzy logic and GIS, and a respective map was created. The application area of the aforementioned methodologies was the Ardas basin, due to the frequent and important floods that have taken place there in recent years. The results were then compared to the already observed floods. The result analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the ideal model to apply to other areas. These results are capable of contributing to the delineation of high-risk areas and to the creation of successful management plans for dealing with floods.
Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system
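The AHP weighting step mentioned for the first model can be sketched compactly. One standard approximation of the priority weights is the row geometric mean of the pairwise comparison matrix (a simplified sketch of the technique in general, not the authors' ten-factor matrix):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric mean method.
    pairwise[i][j]: how much more important criterion i is than criterion j
    (a reciprocal matrix, typically on Saaty's 1-9 scale)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For example, a two-factor matrix stating that slope is 3 times as important as land cover, `[[1, 3], [1/3, 1]]`, yields weights 0.75 and 0.25, which would then scale the factor layers in the GIS overlay.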
Procedia PDF Downloads 377
2231 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method
Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy
Abstract:
The increasing demand for adopting pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simplified approach using the energy-based method to assess the displacement responses of piles subjected to general loading conditions: an axial load, a lateral load, and a bending moment. The governing differential equations and the boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to the general loading conditions are obtained using Hamilton's principle, employing variational principles and the minimization of energies. The soil non-linearity has been incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain values. A simple power law based on published literature is used, where the soil is assumed to be nonlinear-elastic and perfectly plastic. A Tresca yield surface is assumed to develop the soil stiffness variation with different strain levels that defines the non-linearity of the soil strata. This numerical technique has been applied to a pile foundation in a two-layered soil stratum for a pier supporting the bridge and solved using the software MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element analysis results obtained using the software ANSYS 2019 R3. The methodology can be extended to study the response of the multi-strata soil supporting group piles underneath the bridge piers.
Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method
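As a rough illustration of the variational setup (a textbook linear-elastic form, not the authors' nonlinear multi-layer equations): for a static pile, Hamilton's principle reduces to stationarity of the total potential energy, and for the lateral deflection $w(z)$ of a pile of flexural stiffness $EI$ in a subgrade of modulus $k(z)$ one writes

```latex
\delta \int_{t_1}^{t_2} \left( T - \Pi \right) dt = 0
\;\;\xrightarrow{\text{static case}}\;\;
\delta \Pi = 0, \qquad
\Pi = \int_0^L \tfrac{1}{2} EI \left( \frac{d^2 w}{dz^2} \right)^2 dz
    + \int_0^L \tfrac{1}{2}\, k(z)\, w^2 \, dz
    - W_{\mathrm{ext}} .
```

Taking the variation then yields the familiar beam-on-elastic-foundation equation $EI\, w'''' + k(z)\, w = p(z)$ together with its natural boundary conditions; the paper's contribution is replacing the linear $k(z)$ with strain-dependent, layer-wise nonlinear soil relationships.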
Procedia PDF Downloads 139
2230 Studies on Optimizing the Level of Liquid Biofertilizers in Peanut and Maize and Their Economic Analysis
Authors: Chandragouda R. Patil, K. S. Jagadeesh, S. D. Kalolgi
Abstract:
Biofertilizers containing live microbial cells can mobilize one or more nutrients to plants when applied to either the seed or the rhizosphere. They form an integral part of nutrient management strategies for the sustainable production of agricultural crops. Annually, about 22 tons of lignite-based biofertilizers are produced and supplied to farmers at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad, Karnataka state, India. Although carrier-based biofertilizers are common, they have a shorter shelf life, poor quality, high contamination, unpredictable field performance, and a high cost of solid carriers. Hence, liquid formulations are being developed to increase their efficacy and broaden field applicability. An attempt was made to develop liquid formulations of the strains Rhizobium NC-92 (groundnut) and Azospirillum ACD15, both nitrogen-fixing biofertilizers, and Pseudomonas striata, an efficient P-solubilizing bacterium (PSB). Different concentrations of amendments such as additives (glycerol and polyethylene glycol), adjuvants (carboxymethyl cellulose), gum arabic (GA), surfactant (polysorbate), and trehalose (specifically for Azospirillum) were found essential. Combinations of formulations of Rhizobium and PSB for groundnut, and of Azospirillum and PSB for maize, were evaluated under field conditions to determine the optimum level of inoculum required. Each biofertilizer strain was inoculated at the rate of 2, 4, or 8 ml per kg of seeds, and the efficacy of each formulation, both individually and in combinations, was evaluated against the lignite-based formulation at the rate of 20 g each per kg of seeds; an un-inoculated set was included to compare the inoculation effect. The field experiment had 17 treatments in three replicates, and the best level of inoculum was decided based on net returns and the cost:benefit ratio.
In peanut, the combination of 4 ml of Rhizobium and 2 ml of PSB resulted in the highest net returns and a benefit:cost (B:C) ratio of 1:2.98, followed by the combination of 2 ml per kg each of Rhizobium and PSB, with a B:C ratio of 1:2.84. The benefit in terms of net returns was up to 16 percent with lignite-based formulations, while it was up to 48 percent with the best combination of liquid biofertilizers. In maize, the combination of liquid formulations consisting of 4 ml of Azospirillum and 2 ml of PSB resulted in the highest net returns: about 53 percent higher than the un-inoculated control and 20 percent higher than the treatment with the lignite-based formulation. In both crops, inoculation with lignite-based formulations significantly increased net returns over the un-inoculated control, while levels higher or lower than 4 ml of Rhizobium or Azospirillum and higher or lower than 2 ml of PSB were not economical and hence not optimal for these two crops.
Keywords: Rhizobium, Azospirillum, phosphate solubilizing bacteria, liquid formulation, benefit-cost ratio
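The economics above reduce to simple arithmetic. The sketch below reproduces the 1:2.98 benefit:cost figure; the per-hectare gross return and cost values are illustrative placeholders, not data from the study:

```python
def benefit_cost_ratio(gross_returns, total_cost):
    """Return (net_returns, B:C ratio) for one treatment.

    The B:C ratio is expressed as net returns per unit of cost,
    matching the 1:X notation used in the abstract.
    """
    net = gross_returns - total_cost
    return net, net / total_cost

# Hypothetical per-hectare figures chosen so the ratio matches 1:2.98:
net, bc = benefit_cost_ratio(gross_returns=79600, total_cost=20000)
print(f"net returns = {net}, B:C = 1:{bc:.2f}")  # net returns = 59600, B:C = 1:2.98
```

Ranking the 17 treatments by this ratio (and by net returns) is then a one-line sort over the treatment list.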
Procedia PDF Downloads 492
2229 Study on Reusable, Non Adhesive Silicone Male External Catheter: Clinical Proof of Study and Quality Improvement Project
Authors: Venkata Buddharaju, Irene Mccarron, Hazel Alba
Abstract:
Introduction: Male external catheters (MECs) are commonly used to collect and drain urine. MECs are increasingly used in acute care, long-term acute care hospitals, nursing facilities, and other settings as an alternative to invasive urinary catheters to reduce catheter-associated urinary tract infections (CAUTI). MECs are also used to avoid the need for incontinence pads and diapers. Most MECs are held in place by skin adhesive, with the exception of a few, which use a foam strap clamped around the penile shaft. Adhesive condom catheters typically stay in place for 24 hours or less. It is also common practice to wrap extra skin adhesive tape around the condom catheter for additional security of the device. The fixed nature of the adhesive does not allow for normal skin expansion of penile size over time, and the adhesive can cause skin irritation, redness, erosion, and skin damage. The Acanthus condom catheter (ACC) is a patented, specially designed, stretchable silicone catheter without adhesive that adapts to the size and contour of the penis. It is held in place with a single elastic strap that wraps around the lower back and is tied criss-cross to the opposite catheter ring holes. It can be reused for up to 5 days on the same patient after daily cleaning and washing, potentially reducing cost. Methods: The study was conducted from September 17th to October 8th, 2020. The nursing staff was educated and trained on how to use and reuse the catheter. After identifying five (5) appropriate patients, the catheter was placed and maintained by nursing staff. Data on ease of use, leaks, and skin damage were collected and reported by nurses to the nursing education department of the hospital for analysis. Setting: RML Chicago, long-term acute care hospital, an affiliate of Loyola University Medical Center, Chicago, IL, USA.
Results: The data showed that the catheter was easy to apply, remove, wash and reuse, without skin problems or urinary infections. One patient used the catheter for 16 days with washing, reuse, and replacement, without any urine leak or skin issues. A minimal leak was observed in two patients. Conclusion: The Acanthus condom catheter was easy to use and functioned well, with minimal or no leakage during use and reuse. The skin was intact in all patients studied. There were no urinary tract infections in any of the studied patients.
Keywords: CAUTI, male external catheter, reusable, skin adhesive
Procedia PDF Downloads 105
2228 Systematic Review of Digital Interventions to Reduce the Carbon Footprint of Primary Care
Authors: Anastasia Constantinou, Panayiotis Laouris, Stephen Morris
Abstract:
Background: Climate change has been reported as one of the worst threats to healthcare. The healthcare sector is a significant contributor to greenhouse gas emissions, with primary care being responsible for 23% of the NHS’ total carbon footprint. Digital interventions, primarily focusing on telemedicine, offer a route to change. This systematic review aims to quantify and characterize the carbon footprint savings associated with the implementation of digital interventions in the setting of primary care. Methods: A systematic review of published literature was conducted according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. MEDLINE, PubMed, and Scopus databases as well as Google Scholar were searched using key terms relating to “carbon footprint,” “environmental impact,” “sustainability,” “green care,” “primary care,” and “general practice,” with citation tracking used to identify additional articles. Data were extracted and analyzed in Microsoft Excel. Results: Eight studies conducted in four different countries between 2010 and 2023 were identified. Four studies addressed primary care services, three focused on the interface between primary and specialist care, and one addressed both. Digital interventions included the use of mobile applications, online portals, access to electronic medical records, electronic referrals, electronic prescribing, video consultations and the use of autonomous artificial intelligence. Only one study carried out a complete life cycle assessment to determine the carbon footprint of the intervention. It estimated that digital interventions reduced the carbon footprint at the primary care level by 5.1 kg CO₂/visit, and at the interface with specialist care by 13.4 kg CO₂/visit.
When assessing the relationship between travel distance saved and savings in emissions, we identified a strong correlation, suggesting that most of the carbon footprint reduction is attributable to reduced travel. However, two studies also commented on environmental savings associated with reduced use of paper. Patient savings in the form of reduced fuel cost and reduced travel time were also identified. Conclusion: All studies identified significant reductions in carbon footprint following the implementation of digital interventions. In the future, controlled, prospective studies incorporating complete life cycle assessments and accounting for double-consulting effects, use of additional resources, technical failures, quality of care and cost-effectiveness are needed to fully appreciate the sustainable benefit of these interventions.
Keywords: carbon footprint, environmental impact, primary care, sustainable healthcare
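Since most of the reduction is attributable to avoided travel, a first-order estimate of per-visit savings multiplies the avoided round-trip distance by a car emission factor. Both numbers below are assumptions for illustration, not values reported in the review:

```python
def avoided_emissions_kg(round_trip_km, kg_co2_per_km=0.17):
    """Estimate kg of CO2 avoided by replacing one in-person visit
    with a remote consultation.

    Counts travel only; the per-km factor is an assumed average-car
    value, not a figure from the reviewed studies.
    """
    return round_trip_km * kg_co2_per_km

# A hypothetical 30 km avoided round trip yields roughly 5.1 kg CO2,
# the same order as the per-visit saving quoted in the abstract.
print(avoided_emissions_kg(30))
```

A complete life cycle assessment would add terms for device manufacture, data-centre energy and paper use on top of this travel term.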
Procedia PDF Downloads 60
2227 A Benchtop Experiment to Study Changes in Tracer Distribution in the Subarachnoid Space
Authors: Smruti Mahapatra, Dipankar Biswas, Richard Um, Michael Meggyesy, Riccardo Serra, Noah Gorelick, Steven Marra, Amir Manbachi, Mark G. Luciano
Abstract:
Intracranial pressure (ICP) is profoundly regulated by the effects of cardiac pulsation and the volume of the incoming blood. Furthermore, these effects on ICP are amplified by the presence of a rigid skull that does not allow for changes in total volume during the cardiac cycle. These factors play a pivotal role in cerebrospinal fluid (CSF) dynamics and distribution, with consequences that are not well understood to this date and that may have a deep effect on the functioning of the Central Nervous System (CNS). We designed this study with two specific aims: (a) to study how pulsatility influences local CSF flow, and (b) to study how modulating intracranial pressure affects drug distribution throughout the subarachnoid space (SAS) globally. In order to achieve these aims, we built an elaborate in-vitro model of the SAS closely mimicking the dimensions and flow rates of physiological systems. To modulate intracranial pressure, we used an intracranially implanted, cardiac-gated, volume-oscillating balloon (CADENCE device). Commercially available dye was used to visualize changes in CSF flow. We first implemented two control cases, observing how the tracer behaves in the presence of pulsations from the brain phantom and the balloon individually. After establishing the controls, we tested two cases, with the brain and the balloon pulsating together in sync and out of sync. We then analyzed the distribution area using image processing software. The in-sync case produced a significant, five-fold increase in the tracer distribution area relative to the out-of-sync case. Assuming that the tracer fluid mimics blood flow movement, a drug introduced into the SAS with such a system in place would show enhanced distribution and increased bioavailability to a wider spectrum of brain tissue.
Keywords: blood-brain barrier, cardiac-gated, cerebrospinal fluid, drug delivery, neurosurgery
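The distribution-area comparison can be sketched as a pixel count over thresholded frames. The toy frames and threshold below are hypothetical; the actual analysis would use the imaging software's own segmentation:

```python
def distribution_area(frame, threshold=128):
    """Count pixels darker than `threshold` in a grayscale frame
    (a list of rows of 0-255 values); a crude proxy for the
    dye-covered area."""
    return sum(1 for row in frame for px in row if px < threshold)

# Toy 4x4 frames: in-sync pulsation spreads the dye further.
out_of_sync = [[200,  90, 200, 200],
               [200, 200,  90, 200],
               [200, 200, 200, 200],
               [200, 200, 200, 200]]
in_sync     = [[ 90,  90,  90, 200],
               [ 90,  90,  90, 200],
               [ 90,  90,  90, 200],
               [200,  90, 200, 200]]

ratio = distribution_area(in_sync) / distribution_area(out_of_sync)
print(ratio)  # 5.0 with these toy frames, mirroring the 5x finding
```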
Procedia PDF Downloads 180
2226 Multimedia Container for Autonomous Car
Authors: Janusz Bobulski, Mariusz Kubanek
Abstract:
The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving and restoring this type of file. It will also be necessary to develop a method of synchronizing data from lidar, RGB and infrared cameras. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly breaking into our consciousness, and few seem to doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that creating a network of communicating autonomous cars will make it possible to completely eliminate accidents. However, to make this possible, it is necessary to develop effective methods of detecting objects around the moving vehicle. In bad weather conditions, this task is difficult on the basis of the RGB (red, green, blue) image alone. Therefore, in such situations, detection should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return, as well as the synchronization and formatting of these data. The goal of the project is to develop a file structure that can contain different types of data. This type of file is called a multimedia container: a container that holds many data streams, which allows complete multimedia material to be stored in one file. The data streams in such a container should include images, video, sound, subtitles, and additional information, i.e., metadata.
As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this approach, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.
Keywords: autonomous car, image processing, lidar, obstacle detection
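One minimal way to realize such a container is a per-timestamp frame record: a fixed header carrying the capture timestamp and the byte length of each stream, followed by the RGB, lidar and infrared payloads back to back. The `MMCT` magic value and field layout below are hypothetical choices for illustration, not the authors' format:

```python
import struct

MAGIC = b"MMCT"  # hypothetical container signature
HEADER = "<4sQIII"  # magic, timestamp (us), and three payload lengths

def pack_frame(timestamp_us, rgb, lidar, infrared):
    """Pack one synchronized frame into a single byte string."""
    header = struct.pack(HEADER, MAGIC, timestamp_us,
                         len(rgb), len(lidar), len(infrared))
    return header + rgb + lidar + infrared

def unpack_frame(blob):
    """Recover (timestamp_us, rgb, lidar, infrared) from pack_frame output."""
    magic, ts, n_rgb, n_lidar, n_ir = struct.unpack_from(HEADER, blob)
    assert magic == MAGIC, "not a container frame"
    off = struct.calcsize(HEADER)
    rgb = blob[off:off + n_rgb]
    lidar = blob[off + n_rgb:off + n_rgb + n_lidar]
    ir = blob[off + n_rgb + n_lidar:off + n_rgb + n_lidar + n_ir]
    return ts, rgb, lidar, ir

frame = pack_frame(1_000_000, b"\x10\x20\x30", b"\x01\x02", b"\xff")
print(unpack_frame(frame)[0])  # 1000000
```

A file is then a concatenation of such frame records; an index of frame offsets could be appended as metadata to allow random access by timestamp.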
Procedia PDF Downloads 223
2225 Transient Phenomena in a 100 W Hall Thruster: Experimental Measurements of Discharge Current and Plasma Parameter Evolution
Authors: Clémence Royer, Stéphane Mazouffre
Abstract:
Nowadays, electric propulsion (EP) systems play a crucial role in space exploration missions due to their high specific impulse and long operational life. The Hall thruster (HT) is one of the most mature EP technologies: a gridless ion thruster that has proved reliable and high-performing for decades in various space missions. Operation of a HT relies on electron emission from a cathode placed outside a hollow dielectric channel that includes an anode at the back. Negatively charged particles are trapped in a magnetic field and efficiently slowed down. By collisions, the electron cloud ionizes xenon atoms. A large electric field is generated in the axial direction due to the low electron transverse mobility in the region of strong magnetic field. Positive particles are pulled out of the chamber at high velocity and are neutralized directly at the exhaust. This phenomenon accelerates the spacecraft at a high specific impulse. While the HT’s architecture and operating principle are relatively simple, the physics behind thrust is complex and still partly unknown. Current and voltage oscillations, as well as electron properties, have been captured over a 30-minute period after ignition. The observed low-frequency oscillations exhibited specific frequency ranges, amplitudes, and stability patterns. Correlations between the oscillations and plasma characteristics were analyzed. The impact of these instabilities on thruster performance, including thrust efficiency, has been evaluated as well. Moreover, strategies for mitigating and controlling these instabilities, such as filtering, have been developed. In this contribution, in addition to presenting a summary of the results obtained in the transient regime, we will present and discuss recent advances in Hall thruster plasma discharge filtering and control.
Keywords: electric propulsion, Hall thruster, plasma diagnostics, low-frequency oscillations
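The filtering mentioned above can be as simple as low-pass smoothing of the sampled discharge-current trace. The moving-average sketch below is illustrative only, since the abstract does not specify the filter actually used:

```python
def moving_average(signal, window=5):
    """Crude low-pass filter for a sampled trace: each output sample
    is the mean of the surrounding `window` samples (truncated at
    the edges). Spikes are spread out and attenuated."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A single current spike is attenuated from 10 down to 4:
print(moving_average([1, 1, 10, 1, 1], window=3))
```

Characterizing the oscillations themselves would instead use a spectral method (e.g. an FFT of the trace) to extract the dominant low-frequency components.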
Procedia PDF Downloads 88
2224 Investigation of Fumaric Acid Radiolysis Using Gamma Irradiation
Authors: Wafa Jahouach-Rabai, Khouloud Ouerghi, Zohra Azzouz-Berriche, Faouzi Hosni
Abstract:
Widely used organic products from the pharmaceutical industry, essentially carboxylic acids, have been detected in environmental systems. In this context, the degradation efficiency of these contaminants was evaluated using an advanced oxidation process (AOP), namely an ionization process, as an alternative to conventional water treatment technologies. This process generates radical reactions that directly degrade organic pollutants in wastewater. In fact, gamma irradiation of aqueous solutions produces several reactive radicals, essentially the hydroxyl radical (•OH), which destroy recalcitrant pollutants. Different concentrations of aqueous solutions of fumaric acid (FA) were considered in this study (0.1-1 mmol/L), treated with irradiation doses from 1 to 15 kGy at a rate of 6.1 kGy/h in a pilot-scale ionizing system (⁶⁰Co irradiator). Variations of the main parameters influencing degradation efficiency versus absorbed dose were examined with the aim of optimizing total mineralization of the considered pollutants. A preliminary degradation pathway to complete mineralization into CO₂ has been suggested, based on the detection of residual degradation derivatives using different techniques, namely high-performance liquid chromatography (HPLC) and electron paramagnetic resonance (EPR) spectroscopy. Results revealed total destruction of the treated compound, confirming the efficiency of this process in water remediation. We investigated the reactivity of hydroxyl radicals generated by irradiation on the dicarboxylic acid (FA) in aqueous solutions, leading to its degradation into other, smaller molecules. In fact, gamma irradiation of FA leads to the formation of hydroxylated intermediates such as the hydroxycarbonyl radical, which were identified by EPR spectroscopy. Finally, pilot-plant irradiation facilities improved the applicability of radiation technology on a large scale.
Keywords: AOP, radiolysis, fumaric acid, gamma irradiation, hydroxyl radical, EPR, HPLC
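Absorbed dose is the dose rate integrated over exposure time, so the exposure times behind the 1-15 kGy doses follow directly from the stated 6.1 kGy/h rate:

```python
def irradiation_time_h(target_dose_kgy, dose_rate_kgy_per_h=6.1):
    """Exposure time (hours) needed to deliver a target absorbed dose
    at a constant dose rate; 6.1 kGy/h is the rate stated in the
    abstract."""
    return target_dose_kgy / dose_rate_kgy_per_h

# The 15 kGy maximum dose corresponds to roughly 2.46 h of exposure:
print(round(irradiation_time_h(15), 2))
```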
Procedia PDF Downloads 170
2223 Railway Ballast Volumes Automated Estimation Based on LiDAR Data
Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert
Abstract:
The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, excavation width, volume of the track skeleton (sleepers and rails) and sleeper spacing. Since 2012, SNCF has been collecting 3D point-cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data is carried out.
The surplus in ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds as dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounting to values close to the total quantities of spoil ballast excavated.
Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point
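The surplus-estimation step can be sketched as a comparison of two sampled cross-section profiles: the measured DTM heights against the theoretical maintenance profile, with positive height differences integrated across the section. The profile values and sampling step below are illustrative, not SNCF data:

```python
def ballast_surplus_area(measured, theoretical, dx=0.1):
    """Approximate the surplus cross-section area (m^2) as the sum of
    positive height differences between the measured DTM profile and
    the theoretical maintenance profile, sampled every dx metres
    across the track section. Multiplying by segment length gives an
    estimate of the surplus volume."""
    assert len(measured) == len(theoretical)
    return sum(max(m - t, 0.0) for m, t in zip(measured, theoretical)) * dx

# Toy shoulder profile: 0.05 m of excess height over three samples.
measured    = [0.40, 0.45, 0.45, 0.45, 0.40]
theoretical = [0.40, 0.40, 0.40, 0.40, 0.40]
print(ballast_surplus_area(measured, theoretical))  # about 0.015 m^2
```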
Procedia PDF Downloads 107
2222 The Lopsided Burden of Non-Communicable Diseases in India: Evidences from the Decade 2004-2014
Authors: Kajori Banerjee, Laxmi Kant Dwivedi
Abstract:
India is part of the ongoing globalization, contemporary convergence, industrialization and technical advancement taking place worldwide. Among the manifestations of this evolution are rapid demographic, socio-economic, epidemiological and health transitions. There has been a considerable increase in non-communicable diseases due to changes in lifestyle. This study aims to assess the direction of the burden of disease and to compare the pressure of infectious diseases against cardio-vascular, endocrine, metabolic and nutritional diseases. The change in prevalence over a ten-year period (2004-2014) is further decomposed to determine the net contribution of various socio-economic and demographic covariates. The present study uses the recent 71st (2014) and 60th (2004) rounds of the National Sample Survey. The pressure of infectious diseases against cardio-vascular (CVD) and endocrine, metabolic and nutritional (EMN) diseases during 2004-2014 is measured by Prevalence Rates (PR), Hospitalization Rates (HR) and Case Fatality Rates (CFR). The prevalence of non-communicable diseases is further used as a dependent variable in a logit regression to find the effect of various social, economic and demographic factors on the chances of suffering from a particular disease. A multivariate decomposition technique further assists in determining the net contribution of socio-economic and demographic covariates. This paper provides evidence of stagnation in the burden of communicable diseases (CD) and a rapid increase in the burden of non-communicable diseases (NCD), uniformly for all population sub-groups in India. The CFR for CVD increased drastically during 2004-2014. The logit regression indicates that the chances of suffering from CVD and EMN diseases are significantly higher among urban residents, older ages, females, and widowed/divorced and separated individuals.
The decomposition provides ample proof that improvements in quality-of-life markers such as education, urbanization and longevity have positively contributed to the increase in NCD prevalence rates. In India’s current epidemiological phase, the compression theory of morbidity is in action, as a significant rise in the probability of contracting NCDs over the period is observed among older ages. Age is found to be a vital contributor to the increase in the probability of having CVD and EMN diseases over the study decade 2004-2014 in the nationally representative sample of the National Sample Survey.
Keywords: cardio-vascular disease, case-fatality rate, communicable diseases, hospitalization rate, multivariate decomposition, non-communicable diseases, prevalence rate
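The rate measures used above have straightforward definitions; the sketch below illustrates two of them with hypothetical counts, not NSS estimates:

```python
def prevalence_rate(cases, population, per=100_000):
    """Cases of a condition per `per` persons in the population."""
    return cases / population * per

def case_fatality_rate(deaths, cases, per=100):
    """Deaths among diagnosed cases, as a percentage by default.
    A rising CFR with stable prevalence signals worsening outcomes
    among those already ill."""
    return deaths / cases * per

# Illustrative numbers only:
print(prevalence_rate(450, 90_000))   # 500 cases per 100,000 persons
print(case_fatality_rate(18, 450))    # 4 percent
```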
Procedia PDF Downloads 310
2221 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion
Authors: Roberta Martino, Viviana Ventre
Abstract:
From a theoretical point of view, the relationship between the risk associated with an investment and the expected value is directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence proves that this relationship is distorted in the minds of investors and is perceived exactly the opposite way. To understand the discrepancy between the actual actions of investors and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability does not consider the intertemporality that characterizes financial activities, nor the limited cognitive capacity of the decision-maker. Cognitive psychology has made it possible to understand that the mind acts with a compromise between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening in one's head. This projection into the future requires a cognitive effort, and it is what differentiates choices under conditions of risk from choices under conditions of uncertainty.
In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision-makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be accounted for. The work considers an extension of prospect theory with a temporal component, with the aim of providing a description of the attitude towards risk with respect to the passage of time.
Keywords: impatience, risk aversion, subjective probability, uncertainty
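One common way to formalize the kind of extension discussed here combines the prospect-theory value and probability-weighting functions with a hyperbolic discount factor. The functional forms below are standard choices from the literature, not necessarily the ones adopted by the authors:

```latex
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\
-\lambda(-x)^{\beta} & x < 0,
\end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}},
\qquad
V = \sum_{i} w(p_i)\, v(x_i)\, D(t), \quad D(t) = \frac{1}{1 + kt}.
```

Here $\lambda > 1$ captures loss aversion, $w$ replaces objective probability with a subjective decision weight, and the hyperbolic factor $D(t)$ makes the valuation of a delayed prospect decline with its delay $t$, modelling the decision-maker's impatience.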
Procedia PDF Downloads 106
2220 An Index to Measure Transportation Sustainable Performance in Construction Projects
Authors: Sareh Rajabi, Taha Anjamrooz, Salwa Bheiry
Abstract:
The continuous increase in the world population, resource shortages and the warning of climate change cause various environmental and social issues for the world. Thus, the concept of sustainability is much needed nowadays. Organizations are progressively falling under strong worldwide pressure to integrate sustainability practices into their project decision-making. Construction projects are among the most significant in this respect, since construction is one of the biggest sectors, of main significance for the national economy, and hence has a massive effect on the environment and society. It is therefore important to discover approaches to incorporate sustainability into the management of those projects. This study presents a combined sustainability index for projects with sustainable transportation, formed on the basis of a comprehensive literature review and a survey study. Transportation systems enable the movement of goods and services worldwide, leading to economic growth and job creation while creating negative impacts on the environment and society. This research aims to quantify the sustainability indicators by 1) identifying the importance of sustainable transportation indicators based on the sustainable practices used in construction projects and 2) measuring the effectiveness of the practices through these indicators on the three sustainability pillars. A total of 26 sustainability indicators were selected and grouped under the related sustainability pillars. A survey with a scoring system was used to collect opinions about the sustainability indicators. A combined sustainability index considering the three sustainability pillars can be helpful in evaluating the transportation sustainability practices of a project and in making decisions regarding project selection.
In addition to focusing on financial resource allocation in project selection, the decision-maker can take sustainability into account as an important key alongside the project’s return and risk. The purpose of this study is to measure the performance of transportation sustainability, allowing companies to assess multiple candidate projects. This is useful for decision-makers to rank projects and focus more on future sustainable projects.
Keywords: sustainable transportation, transportation performances, sustainable indicators, sustainable construction practice, sustainability
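A combined index of this kind is typically a weighted aggregation of normalized indicator scores across the three pillars. The sketch below uses hypothetical scores and an equal-weight scheme rather than the survey-derived weights from the study:

```python
def sustainability_index(scores, weights):
    """Weighted average of indicator scores (each normalized to 0-1),
    grouped by pillar. `scores` and `weights` both map
    pillar -> list of per-indicator values."""
    total, wsum = 0.0, 0.0
    for pillar, vals in scores.items():
        for v, w in zip(vals, weights[pillar]):
            total += v * w
            wsum += w
    return total / wsum

# Hypothetical project with four indicators across the three pillars:
scores  = {"environmental": [0.8, 0.6], "social": [0.7], "economic": [0.9]}
weights = {"environmental": [1.0, 1.0], "social": [1.0], "economic": [1.0]}
print(sustainability_index(scores, weights))  # about 0.75
```

Ranking candidate projects then reduces to sorting them by this index, alongside return and risk.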
Procedia PDF Downloads 141
2219 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil
Authors: Ana Julia C. Kfouri
Abstract:
A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the environment and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics in these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study was carried out on the thermal comfort situation at the Federal University of Parana (UFPR). The main goal of the study is to perform a thermal analysis in three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation was conducted, based on the EnergyPlus software, to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach adopted in the study allowed a distinct perspective on an educational building, to better understand the classroom thermal performance and the reasons for such behavior. Finally, the study induces a reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation in engineering solutions, in order to improve the thermal performance of UFPR’s buildings.
Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort
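The comparison between simulation output and on-site measurements is commonly summarized with an error metric such as the root-mean-square error (RMSE). The sketch below uses invented temperature series for illustration, not the study's data:

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between a simulated series (e.g. hourly
    EnergyPlus zone air temperature) and the measured series of equal
    length; lower values indicate a better-calibrated model."""
    assert len(simulated) == len(measured)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured))
                     / len(simulated))

# Hypothetical hourly air temperatures (degrees C):
sim  = [22.1, 23.0, 24.2, 25.0]
meas = [21.9, 23.4, 24.0, 25.6]
print(round(rmse(sim, meas), 3))  # about 0.39 degrees C
```

The same metric applied to relative humidity gives a second calibration check for the model.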
Procedia PDF Downloads 383
2218 Artificial Intelligence: Reimagining Education
Authors: Silvia Zanazzi
Abstract:
Artificial intelligence (AI) has become an integral part of our world, transitioning from scientific exploration to practical applications that impact daily life. The emergence of generative AI is reshaping education, prompting new questions about the role of teachers, the nature of learning, and the overall purpose of schooling. While AI offers the potential to optimize teaching and learning processes, concerns about discrimination and bias arising from training data and algorithmic decisions persist. There is a risk of a disconnect between the rapid development of AI and the goal of building inclusive educational environments. The prevailing discourse on AI in education often prioritizes efficiency and individual skill acquisition. This narrow focus can undermine the importance of collaborative learning and shared experiences. A growing body of research challenges this perspective, advocating for AI that enhances, rather than replaces, human interaction in education. This study aims to critically examine the relationship between AI and education. A review of existing research will identify both the potential benefits and the risks of AI implementation. The goal is to develop a framework that supports the ethical and effective integration of AI into education, ensuring it serves the needs of all learners. The theoretical reflection will be developed on the basis of a review of national and international scientific literature on artificial intelligence in education. The primary objective is to curate a selection of critical contributions from diverse disciplinary perspectives and/or an inter- and transdisciplinary viewpoint, providing a state-of-the-art overview and a critical analysis of potential future developments. Subsequently, a thematic analysis of these contributions will enable the creation of a framework for understanding and critically analyzing the role of artificial intelligence in schools and education, highlighting promising directions and potential pitfalls.
The expected results are (1) a classification of the cognitive biases present in representations of AI in education and the associated risks, and (2) a categorization of potentially beneficial interactions between AI applications and teaching and learning processes, including those already in use or under development. While not exhaustive, the proposed framework will serve as a guide for critically exploring the complexity of AI in education. It will help to reframe the dystopian visions often associated with technology and facilitate discussions on fostering synergies that balance the ‘dream’ of quality education for all with the realities of AI implementation. By highlighting reductionist models rooted in fragmented and utilitarian views of knowledge, the discourse on artificial intelligence in education stimulates the construction of alternative perspectives that can ‘return’ teaching and learning to education, human growth, and the well-being of individuals and communities.
Keywords: education, artificial intelligence, teaching, learning
Procedia PDF Downloads 192
2217 Time Domain Dielectric Relaxation Microwave Spectroscopy
Authors: A. C. Kumbharkhane
Abstract:
Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a technique for observing the time-dependent response of a sample after the application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electric field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, such as polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; the TDRS technique therefore covers an extensive range of dynamical processes. The corresponding frequencies range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods such as NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established the TDR technique in our laboratory, providing dielectric permittivity information in the frequency range 10 MHz to 30 GHz. The TDR method involves generating a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying dielectric relaxation behaviour in liquid systems to understand the role of the hydrogen bond. The intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties, and the dynamics of hydrogen-bonded liquids have been studied.
The theoretical model to explain the experimental results will be discussed.
Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time
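Complex permittivity spectra of the kind described above are commonly interpreted with relaxation models such as the single Debye model, ε*(ω) = ε∞ + (εs − ε∞)/(1 + jωτ). As a minimal illustrative sketch (the model choice and the water-like parameters are assumptions for illustration, not the authors' fitting procedure):

```python
import math

def debye_permittivity(freq_hz, eps_static, eps_inf, tau_s):
    """Single-Debye complex permittivity: eps_inf + (eps_s - eps_inf)/(1 + j*w*tau)."""
    omega = 2 * math.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / (1 + 1j * omega * tau_s)

# Illustrative water-like parameters at room temperature:
# eps_s ~ 78.4, eps_inf ~ 5.2, tau ~ 8.3 ps (assumed values)
for f in (1e8, 1e9, 1e10, 3e10):
    e = debye_permittivity(f, 78.4, 5.2, 8.3e-12)
    # eps' is the real (storage) part; eps'' = -Im is the dielectric loss
    print(f"{f:.1e} Hz: eps' = {e.real:6.2f}, eps'' = {-e.imag:6.2f}")
```

The loss ε'' peaks at ω = 1/τ, which is how the relaxation time is read off a measured spectrum.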
Procedia PDF Downloads 335
2216 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from ground acceleration, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Previously, a mathematical model for the dynamics of RISs was developed using Lagrange’s equations (LE) and experimentally validated. A new mathematical model was developed using Gauss’s Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss’s Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects related to the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform.
The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
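Under Gauss's Principle of Least Constraint, the accelerations minimize (a − M⁻¹f)ᵀM(a − M⁻¹f) subject to the constraints expressed at acceleration level, A a = b, which yields a closed-form constrained least-squares solution. A minimal sketch of that general machinery, not the authors' RIS model, using a hypothetical two-mass example:

```python
import numpy as np

def gauss_accel(M, f, A, b):
    """Accelerations minimizing the Gauss function (a - M^-1 f)^T M (a - M^-1 f)
    subject to the acceleration-level constraint A a = b."""
    a_free = np.linalg.solve(M, f)                       # unconstrained accelerations
    Minv_At = np.linalg.solve(M, A.T)
    lam = np.linalg.solve(A @ Minv_At, b - A @ a_free)   # constraint multipliers
    return a_free + Minv_At @ lam

# Hypothetical example: two unit masses tied by the acceleration constraint
# a1 - a2 = 0, with an external force on the first mass only.
M = np.eye(2)
f = np.array([1.0, 0.0])
A = np.array([[1.0, -1.0]])
b = np.array([0.0])
a = gauss_accel(M, f, A, b)
print(a)  # [0.5 0.5] -- the force is shared, as the constraint demands
```

For non-holonomic rolling constraints, A and b depend on the configuration and velocities, which is why (as the abstract notes) constraint drift must be controlled during numerical integration.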
Procedia PDF Downloads 248
2215 The Use of Video Conferencing to Aid the Decision in Whether Vulnerable Patients Should Attend In-Person Appointments during a COVID Pandemic
Authors: Nadia Arikat, Katharine Blain
Abstract:
During the worst of the COVID pandemic, only essential treatment was provided for patients needing urgent care. With the prolonged extent of the pandemic, there has been a return to more routine referrals for paediatric dentistry advice and treatment for specialist conditions. However, some of these patients and/or their carers may have significant medical issues, meaning that attending in-person appointments carries additional risks. This poses an ethical dilemma for clinicians. This project looks at how a secure video conferencing platform (“Near Me”) has been used to assess the need and urgency for in-person new patient visits, particularly for patients and families with additional risks. “Near Me” is a secure online video consulting service used by NHS Scotland. In deciding whether to bring a new patient to the hospital for an appointment, the clinical condition of the teeth together with the urgency for treatment need to be assessed. This is not always apparent from the referral letter. In addition, it is important to judge the risks to the patients and carers of such visits, particularly if they have medical issues. The use and effectiveness of “Near Me” consultations to help decide whether vulnerable paediatric patients should have in-person appointments will be illustrated and discussed using two families: one where the child is medically compromised (Alagille syndrome with previous liver transplant), and the other where there is a medically compromised parent (undergoing chemotherapy and a bone marrow transplant). In both cases, it was necessary to take into consideration the risks and moral implications of requesting that they attend the dental hospital during a pandemic. The option of remote consultation allowed further clinical information to be evaluated and the families to take part in the decision-making process about whether and when such visits should be scheduled.
These cases will demonstrate how medically compromised patients (or patients with vulnerable carers) could have their dental needs assessed in a socially distanced manner by video consultation. Together, the clinician and the patient’s family can weigh up the risks, with regards to COVID-19, of attending for in-person appointments against the benefit of having treatment. This is particularly important for new paediatric patients who have not yet had a formal assessment. The limitations of this technology will also be discussed. It is limited by internet availability, the strength of the connection, the video quality, and families owning a device which allows video calls. For those from a lower socio-economic background or living in some rural areas, this may not be possible, limiting its usefulness. For the two patients discussed in this project, where the urgency of their dental condition was unclear, video consultation proved beneficial in deciding an appropriate outcome and preventing unnecessary exposure of vulnerable people to a hospital environment during a pandemic, demonstrating the usefulness of such technology when it is used appropriately.
Keywords: COVID-19, paediatrics, triage, video consultations
Procedia PDF Downloads 97
2214 Determine Causal Factors Affecting the Responsiveness and Productivity of Non-Governmental Universities
Authors: Davoud Maleki
Abstract:
Today, education and investment in human capital is a long-term investment without which the economy stagnates. Higher education represents a type of investment in human resources: by providing and improving knowledge, skills, and attitudes, it contributes to economic development. Supplying efficient human resources by increasing people's efficiency and productivity, and expanding the boundaries of knowledge and technology through training at highly specialized levels, are responsibilities of universities. The university therefore plays an infrastructural role in economic growth and development, because education creates skills and expertise in people and improves their abilities. In recent decades, Iran's higher education system has faced many problems; this study therefore sought to identify and validate the causal factors affecting the responsiveness and productivity of non-governmental universities. The qualitative data came from semi-structured interviews with 25 senior and middle managers working in units of the Islamic Azad University of Tehran province, selected by theoretical sampling. In data analysis, the stepwise method and the analytical techniques of Strauss and Corbin (1992) were used. After determining the central category (responsiveness to stakeholders) and relating the other categories, expressions, and ideas to it, six main categories were identified as causal factors affecting the university's responsiveness and productivity: (1) scientism, (2) human resources, (3) creating motivation in the university, (4) development based on needs assessment, (5) the teaching and learning process, and (6) university quality evaluation.
To validate the responsiveness model obtained from the qualitative stage, a questionnaire was prepared, and responses were received from 146 master's and doctoral students of the Islamic Azad University in Tehran province. The quantitative data were analyzed with descriptive statistics and first- and second-order factor analysis using SPSS and Amos 23. The findings indicated a relationship between the central category and the causal factors affecting responsiveness, and the results of the model test in the quantitative stage confirmed the conceptual model as a whole.
Keywords: accountability, productivity, non-governmental universities, grounded theory
Procedia PDF Downloads 57
2213 Evaluation of Bucket Utility Truck In-Use Driving Performance and Electrified Power Take-Off Operation
Authors: Robert Prohaska, Arnaud Konan, Kenneth Kelly, Adam Ragatz, Adam Duran
Abstract:
In an effort to evaluate the in-use performance of electrified Power Take-Off (PTO) systems on bucket utility trucks operating under real-world conditions, data from 20 medium- and heavy-duty vehicles operating in California, USA were collected, compiled, and analyzed by the National Renewable Energy Laboratory's (NREL) Fleet Test and Evaluation team. In this paper, duty-cycle statistical analyses of class 5, medium-duty quick response trucks and class 8, heavy-duty material handler trucks are performed to examine and characterize vehicle dynamics trends and relationships based on collected in-use field data. With more than 100,000 kilometers of driving data collected over 880+ operating days, researchers have developed a robust methodology for identifying PTO operation from in-field vehicle data. Researchers apply this unique methodology to evaluate the performance and utilization of the conventional and electric PTO systems. Researchers also created custom representative drive cycles for each vehicle configuration and performed modeling and simulation activities to evaluate the potential fuel and emissions savings from hybridization of the tractive driveline on these vehicles. The results of these analyses statistically and objectively define the vehicle dynamic and kinematic requirements for each vehicle configuration as well as show the potential for further system optimization through driveline hybridization. Results are presented in both graphical and tabular formats illustrating a number of key relationships between parameters observed within the data set that relate specifically to medium- and heavy-duty utility vehicles operating under real-world conditions.
Keywords: drive cycle, heavy-duty (HD), hybrid, medium-duty (MD), PTO, utility
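The abstract does not detail NREL's PTO-identification methodology. As a purely hypothetical illustration of the general idea, PTO operation on a utility truck is often associated with periods where the power source is active while the vehicle is stationary (e.g., bucket work at a job site); a minimal sketch under that assumption:

```python
def find_pto_segments(speed_kph, engine_on, min_samples=3):
    """Flag candidate PTO segments: contiguous runs where the engine is on
    but the vehicle is stationary for at least `min_samples` samples.
    Hypothetical heuristic for illustration -- not NREL's actual methodology."""
    segments, start = [], None
    for i, (v, on) in enumerate(zip(speed_kph, engine_on)):
        stationary = on and v < 1.0          # below 1 km/h treated as stopped
        if stationary and start is None:
            start = i
        elif not stationary and start is not None:
            if i - start >= min_samples:
                segments.append((start, i))
            start = None
    if start is not None and len(speed_kph) - start >= min_samples:
        segments.append((start, len(speed_kph)))
    return segments

# 1 Hz samples: drive, stop with engine running (bucket work), drive, key-off
speed = [30, 25, 0, 0, 0, 0, 0, 20, 40, 0]
engine = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
print(find_pto_segments(speed, engine))  # [(2, 7)]
```

In practice such a heuristic would be refined with additional channels (PTO engagement signals, hydraulic pressure, or ePTO battery current) where available.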
Procedia PDF Downloads 395