Search results for: higher heating value
129 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor
Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang
Abstract:
Highly branched dendrimers provide structural homogeneity, controlled composition, a size comparable to biomolecules, internal porosity, and multiple functional groups for conjugation reactions. Electro-active dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between the redox-active dendrimers and the biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron “pool” in a system. The ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons at the same applied potential. Therefore, it can serve a dual purpose: building a film over the electrode for the immunosensor and immobilizing biomolecules for sensing. The electrochemical immunosensor thus developed offers fast and sensitive analysis, is inexpensive, and requires no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for fundamental life-science research, clinical diagnosis, and disease prevention. An elevated concentration of biomarkers in body fluids is an early indication of some types of cancer, and among all the biomarkers, IgG is the most common and most extensively used clinical cancer biomarker. We present an IgG (immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active generation-2 ferrocenyl dendrimer (G2Fc) as the glassy carbon electrode (GCE) material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media at varying scan rates to elucidate the reaction mechanism. The potential shift was found to be higher in the aqueous electrolyte due to the presence of more hydrogen bonds, which reduced the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution of pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly upon immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the protein BSA to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immunoconjugates that block mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With increasing IgG concentration, a growing amount of immunoconjugates forms, further decreasing the peak current. The incubation time and the concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding the antibodies.
It may be assumed that both electron transfer and diffusion contribute to the signal transduction between the dendrimers and the antibody.
Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry
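One illustrative way to turn the concentration-dependent peak-current decrease described above into a working calibration is a fit of current against log concentration; the data points, fitted form, and function names below are hypothetical and only show the kind of calculation involved, not the authors' calibration.

```python
# Illustrative sketch (not the study's data): amperometric calibration for an
# immunosensor whose peak current decreases as more IgG is captured, so a linear
# fit of current vs. log10(concentration) can serve as a working curve.
import numpy as np

conc = np.array([2, 10, 50, 250, 1000], dtype=float)   # hypothetical IgG, ng/mL
peak_current = np.array([9.1, 8.2, 7.0, 5.9, 4.8])     # hypothetical peak current, uA

slope, intercept = np.polyfit(np.log10(conc), peak_current, 1)

def igg_from_current(i_peak):
    """Invert the calibration line to estimate IgG concentration (ng/mL)."""
    return 10 ** ((i_peak - intercept) / slope)

print(f"calibration: I = {slope:.3f} * log10(C) + {intercept:.3f}")
print(f"estimated IgG for a 6.5 uA response: {igg_from_current(6.5):.1f} ng/mL")
```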
Procedia PDF Downloads 337
128 Oncolytic Efficacy of Thymidine Kinase-Deleted Vaccinia Virus Strain Tiantan (oncoVV-TT) in Glioma
Authors: Seyedeh Nasim Mirbahari, Taha Azad, Mehdi Totonchi
Abstract:
Oncolytic viruses, which replicate only in tumor cells, are being extensively studied for use in cancer therapy. A particular virus known as the vaccinia virus, a member of the poxvirus family, has demonstrated oncolytic activity against glioma. Treating glioma with traditional methods such as chemotherapy and radiotherapy is quite challenging. Even though oncolytic viruses have shown immense potential in cancer treatment, their effectiveness in glioblastoma treatment is still low. Therefore, there is a need to improve and optimize immunotherapies for better results. In this study, we have designed oncoVV-TT, which can more effectively target tumor cells while minimizing replication in normal cells, by replacing the thymidine kinase gene with a luc-p2a-GFP gene expression cassette. The human glioblastoma cell line U251 MG, the rat glioblastoma cell line C6, and the non-tumor cell line HFF were plated at 10⁵ cells per well in 12-well plates in 2 mL of DMEM-F2 medium with 10% FBS added to each well, then incubated at 37°C. After 16 hours, the cells were treated with oncoVV-TT at an MOI of 0.01 or 0.1 and left in the incubator for a further 24, 48, 72, or 96 hours. A viral replication assay, fluorescence imaging, and viability tests, including trypan blue and crystal violet, were conducted to evaluate the cytotoxic effect of oncoVV-TT. The findings show that oncoVV-TT had significantly higher cytotoxic activity and proliferation rates in tumor cells in a dose- and time-dependent manner, with the strongest effect observed in U251 MG. To conclude, oncoVV-TT has the potential to be a promising oncolytic virus for cancer treatment, with a stronger cytotoxic effect in human glioblastoma cells than in rat glioma cells. To assess the effectiveness of vaccinia virus-mediated viral therapy, we tested the U251 MG and C6 tumor cell lines, taken from human and rat gliomas, respectively. The study evaluated oncoVV-TT's ability to replicate and lyse cells and analyzed the survival rates of the tested cell lines when treated with different doses of oncoVV-TT. Additionally, we compared the sensitivity of the human and rat glioma cell lines to the oncolytic vaccinia virus. All experiments involving viruses were conducted under biosafety level 2. We engineered a vaccinia-based oncolytic virus called oncoVV-TT to replicate specifically in tumor cells. To propagate the oncoVV-TT virus, HeLa cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus was added at an MOI of 10. After 48 h, cells were harvested by scraping, and viruses were collected by three sequential freeze-thaw cycles followed by removal of cell debris by centrifugation (1500 rpm, 5 min). The supernatant was stored at −80 °C for the following experiments. To measure the replication of the virus in HeLa cells, cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus at an MOI of 5 or an equal dilution of PBS was added. At treatment times of 0 h, 24 h, 48 h, 72 h, and 96 h, the viral titers were determined under a fluorescence microscope (BZ-X700; Keyence, Osaka, Japan). Fluorescence intensity was quantified using ImageJ software according to the manufacturer's protocol. For the isolation of single-virus clones, HeLa cells were seeded in six-well plates (5 × 10⁵ cells/well). After 24 h (100% confluent), the cells were infected with a 10-fold dilution series of TianTan green fluorescent protein (GFP) virus and incubated for 4 h.
To examine the cytotoxic effect of the oncoVV-TT virus on U251 MG and C6 cells, trypan blue and crystal violet assays were used.
Keywords: oncolytic virus, immune therapy, glioma, vaccinia virus
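Two of the routine calculations behind the infection and viability steps described above (the inoculum needed for a given MOI, and percent viability from a trypan blue count) can be sketched as follows; the stock titer and cell counts are assumed placeholder values, not figures from the study.

```python
# Illustrative MOI and viability arithmetic (hypothetical numbers).
cells_per_well = 1e5          # plating density in a 12-well plate
stock_titer = 1e7             # assumed viral stock titer, PFU/mL

def inoculum_volume_ul(moi, cells=cells_per_well, titer=stock_titer):
    """Volume of stock (microlitres) delivering the requested MOI."""
    return moi * cells / titer * 1000.0

def trypan_blue_viability(live, dead):
    """Percent viable cells from live/dead counts."""
    return 100.0 * live / (live + dead)

for moi in (0.01, 0.1):
    print(f"MOI {moi}: add {inoculum_volume_ul(moi):.1f} uL of stock per well")
print(f"viability: {trypan_blue_viability(182, 35):.1f} %")
```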
Procedia PDF Downloads 79
127 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties
Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer
Abstract:
Biodegradable and biocompatible block copolymers have risen as the gold standard in both medical and environmental applications. Moreover, if their architecture is controlled, more advanced applications can be foreseen. In the meantime, organocatalytic ROP has been promoted as a more rapid and cleaner route, compared to traditional organometallic catalysis, towards the efficient synthesis of block copolymer architectures. Therefore, herein we report a novel organocatalytic pathway using a guanidine catalyst (TBD) for the ring-opening polymerization of trimethylene carbonate initiated by a poly(caprolactone) pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, without any residual products and with the desired block proportions, was achieved within 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Besides, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step of previously methacrylated block copolymers. Subsequently, stimulated by the insufficient studies on the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry (DSC) and atomic force microscopy (AFM). First, in the DSC measurements, the block copolymers with χABN values above 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, giving an initial indication of a phase-separated system. Meanwhile, the existence of the crystalline phase was supported by the presence of a melting temperature. As expected, the crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Not even crosslinking in the melt state, and hence the creation of a dense polymer network, disturbed the crystallization. However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melt. Accordingly, AFM analysis of liquid-nitrogen-quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These films remained stable, with their morphology unchanged, even after 4 months at room temperature. However, as demonstrated by DSC analysis once the temperature was raised above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching destroyed the semi-crystalline network, while access to thermodynamically phase-separated structures was possible at temperatures below the poly(caprolactone) melting point. Precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the temperature at which the material recovers its original shape depended on the block placement, i.e., whether PTMC or PCL forms the end-block. Therefore, it has been possible to obtain a block copolymer with a transition temperature around 40°C, thus opening potential real-life medical applications.
In conclusion, this initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern applications.
Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory
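The χABN > 20 criterion invoked above can be illustrated with a toy calculation; the interaction parameter and degree of polymerization below are assumed values for illustration only, not measured properties of the PTMC/PCL system.

```python
# Minimal sketch of the segregation criterion: compare chi_AB * N with the ~20
# threshold cited for these copolymers. chi_ab and n_total are placeholders.
def chi_N(chi_ab, n_total):
    return chi_ab * n_total

chi_ab = 0.08     # assumed PTMC/PCL interaction parameter (illustrative)
n_total = 300     # assumed overall degree of polymerization
product = chi_N(chi_ab, n_total)
verdict = "phase separation expected" if product > 20 else "likely disordered"
print(f"chi*N = {product:.1f} -> {verdict}")
```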
Procedia PDF Downloads 128
126 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation
Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya
Abstract:
The estimation of accumulated radiation doses in people occupationally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation counter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determination of radionuclides in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3–1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084–0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min were used. Herewith, assuming individual variation of the chromosomal aberration frequency (1–10%), the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the differing reactions of individuals to irradiation, i.e., radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined cohort was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). According to the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant correlation (0.66; n = 48, β ≥ 0.999) between the doses determined from the results of cytogenetic analysis and the doses of external radiation obtained with thermoluminescent dosimeters.
Mathematical models for estimating the radiation dose, taking into account the professionals' radiosensitivity level, were proposed.
Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity
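The dose reconstruction step described above, estimating dose from the yield of dicentrics and centric rings via a calibration curve, can be sketched as follows; the linear-quadratic coefficients are placeholder values for illustration, not the authors' own calibration curve.

```python
# Illustrative dose reconstruction from a dicentric + centric-ring yield using a
# linear-quadratic calibration curve Y = c + a*D + b*D^2 (the standard form for
# gamma exposure). Coefficients are placeholders, not the study's curve.
import math

c, a, b = 0.001, 0.02, 0.06   # assumed background, linear and quadratic terms

def dose_from_yield(y):
    """Solve b*D^2 + a*D + (c - y) = 0 for the positive root D (Gy)."""
    disc = a * a - 4 * b * (c - y)
    return (-a + math.sqrt(disc)) / (2 * b)

observed_yield = 0.0027        # e.g. 0.27% dicentrics + centric rings per cell
print(f"estimated whole-body dose: {dose_from_yield(observed_yield):.3f} Gy")
```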
Procedia PDF Downloads 184
125 Phytochemical Analysis and in vitro Biological Activities of an Ethyl Acetate Extract from the Peel of Punica granatum L. var. Dente di Cavallo
Authors: Silvia Di Giacomo, Marcello Locatelli, Simone Carradori, Francesco Cacciagrano, Chiara Toniolo, Gabriela Mazzanti, Luisa Mannina, Stefania Cesa, Antonella Di Sotto
Abstract:
Hyperglycemia represents the main pathogenic factor in the development of diabetes complications and has been found to be associated with mitochondrial dysfunction and oxidative stress, which in turn increase cell dysfunction. Therefore, counteracting oxidative species appears to be a suitable strategy for preventing hyperglycemia-induced cell damage and supporting the pharmacotherapy of diabetes and metabolic diseases. The antidiabetic potential of many food sources has been linked to the presence of polyphenolic metabolites, particularly flavonoids such as quercetin and its glycosylated form rutin. In line with this evidence, in the present study, we assayed the potential anti-hyperglycemic activity of an ethyl acetate extract from the peel of Punica granatum L. var. Dente di Cavallo (PGE), a fruit well known in traditional medicine for the beneficial properties of its edible juice. The effect of the extract on glucidic metabolism was evaluated by assessing its ability to inhibit α-amylase and α-glucosidase, two digestive enzymes responsible for the hydrolysis of dietary carbohydrates: their inhibition can delay carbohydrate digestion and reduce glucose absorption, thus representing an important strategy for the management of hyperglycemia. Also, the ability of PGE to block the formation of advanced glycation end-products (AGEs), whose accumulation is known to be responsible for diabetic vascular complications, was studied. The iron-reducing and chelating activities, which are the primary mechanisms by which AGE inhibitors stop their metal-catalyzed formation, were evaluated as possible antioxidant mechanisms. Finally, the phenolic content of PGE was characterized by chromatographic and spectrophotometric methods. Our results showed the ability of PGE to inhibit the α-amylase enzyme with a potency similar to that of the positive control: the IC₅₀ values were 52.2 (CL 27.7–101.2) µg/mL and 35.6 (CL 22.8–55.5) µg/mL for acarbose and PGE, respectively. PGE also inhibited the α-glucosidase enzyme with about 25-fold higher potency than the positive controls acarbose and quercetin. Furthermore, the extract exhibited ferrous and ferric ion chelating ability, with maximum effects of 82.1% and 80.6% at a concentration of 250 µg/mL, respectively, and reducing properties, reaching a maximum effect of 80.5% at a concentration of 10 µg/mL. At last, PGE was found able to inhibit AGE production (maximum inhibition of 82.2% at a concentration of 1000 µg/mL), although with lower potency with respect to the positive control rutin. The phytochemical analysis of PGE displayed the presence of high levels of total polyphenols, tannins, and flavonoids, among which ellagic acid, gallic acid, and catechin were identified. Altogether, these data highlight the ability of PGE to control carbohydrate metabolism at different levels, both by inhibiting the metabolic enzymes and by affecting AGE formation, likely through chelating mechanisms. It is also noteworthy that pomegranate peel, although a waste product of juice manufacturing, can be regarded as a nutraceutical source. In conclusion, the present results suggest a possible role for PGE as a remedy for preventing hyperglycemia complications and encourage further in vivo studies.
Keywords: anti-hyperglycemic activity, antioxidant properties, nutraceuticals, polyphenols, pomegranate
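A typical way to derive IC₅₀ values such as those reported above from raw percent-inhibition data is a four-parameter logistic (Hill) fit; the sketch below uses hypothetical data points, not the PGE or acarbose measurements.

```python
# Illustrative IC50 estimation for an enzyme-inhibition assay (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([5, 15, 45, 135, 405], dtype=float)        # ug/mL
inhibition = np.array([12, 28, 55, 78, 91], dtype=float)   # percent

def hill(c, bottom, top, ic50, slope):
    """Four-parameter logistic: inhibition rises from bottom to top around IC50."""
    return bottom + (top - bottom) / (1 + (ic50 / c) ** slope)

popt, _ = curve_fit(hill, conc, inhibition, p0=[0, 100, 40, 1])
print(f"fitted IC50 ~ {popt[2]:.1f} ug/mL")
```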
Procedia PDF Downloads 185
124 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental in modern society and economy. However, they need modernization, maintenance, and reinforcement interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow for an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so it is not expected that considerably larger problems appear in real life; an average 25-year backlog and a ten-year project horizon were considered. Despite the very large increase in the number of decision variables (200 times as many), the computational time did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
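A minimal weighted-sum sketch of this kind of renewal-scheduling formulation is shown below using PuLP; the section costs, priorities, weights, and the simple backlog surcharge are toy assumptions, and the real model includes further operational constraints (line clashes, service-delay limits) not captured here.

```python
# Toy weighted-sum scheduling model: assign each section to a renewal year while
# trading off peak yearly investment, backlog-inflated cost, and priority delay.
import pulp

sections = {"S1": (10, 1), "S2": (6, 2), "S3": (8, 1), "S4": (4, 3)}  # (cost, priority; 1 = highest)
years = list(range(3))
w_peak, w_cost, w_delay = 1.0, 0.1, 0.5

prob = pulp.LpProblem("renewal_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(sections), years), cat="Binary")  # x[s][t] = renew s in year t
peak = pulp.LpVariable("peak_investment", lowBound=0)

for s in sections:                            # every section renewed exactly once
    prob += pulp.lpSum(x[s][t] for t in years) == 1
for t in years:                               # yearly spend bounded by the peak variable
    prob += pulp.lpSum(sections[s][0] * x[s][t] for s in sections) <= peak

total_cost = pulp.lpSum(sections[s][0] * (1 + 0.05 * t) * x[s][t]   # backlog surcharge per waiting year
                        for s in sections for t in years)
delay = pulp.lpSum((t / sections[s][1]) * x[s][t]                   # postponement weighted by priority
                   for s in sections for t in years)
prob += w_peak * peak + w_cost * total_cost + w_delay * delay

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sections:
    year = next(t for t in years if x[s][t].value() > 0.5)
    print(f"{s}: renew in year {year}")
print(f"peak yearly investment: {peak.value():.1f}")
```

In this toy instance, lowering w_peak lets the solver bunch expensive sections into the same year, mirroring the peak-versus-cost/delay trade-off discussed in the abstract.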
Procedia PDF Downloads 145
123 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion
Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty
Abstract:
Reinforcing dental composites with micro-sized fibres can significantly improve the physio-mechanical properties of dental composites. Short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre-reinforced composites, which enhance properties anisotropically. Thus, short-fibre-reinforced dental composites are getting popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photopolymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries and thus ultimately inducing failure of the restoration. Traditional methods to evaluate polymerisation shrinkage, using strain gauges, density-based measurements, dilatometry, or the bonded-disk technique, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, an FBG array embedded in the system can map the real-time shrinkage strain in different regions of the composite. Real-time monitoring of shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been able to reliably measure polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very limited work exists to establish the validity of FBG-based sensors for evaluating volumetric shrinkage of composites reinforced with randomly oriented fibres. The present study aims to fill this research gap and is focused on establishing the usage of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% silane-treated BaAlSiO₂ particulate fillers, or by adding 5% micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of the polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional method for evaluating volumetric shrinkage using density-based measurements. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach can help assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means for measuring real-time polymerisation shrinkage strain for randomly fibre-reinforced dental composites as well.
Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors
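The conversion from a measured Bragg wavelength shift to shrinkage strain follows the standard FBG strain relation; the sketch below uses an assumed Bragg wavelength, photo-elastic coefficient, and shift value purely for illustration, and neglects temperature compensation.

```python
# Minimal sketch: delta_lambda / lambda_B = (1 - p_e) * strain  (temperature term neglected).
def strain_from_bragg_shift(delta_lambda_nm, lambda_b_nm=1550.0, p_e=0.22):
    """Return axial strain (dimensionless) from a Bragg wavelength shift."""
    return delta_lambda_nm / (lambda_b_nm * (1.0 - p_e))

shift = -0.012  # nm, hypothetical compressive shift during photopolymerisation
eps = strain_from_bragg_shift(shift)
print(f"shrinkage strain ~ {eps * 1e6:.0f} microstrain")
```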
Procedia PDF Downloads 154
122 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) have been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibited a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
Procedia PDF Downloads 32
121 Global Evidence on the Seasonality of Enteric Infections, Malnutrition, and Livestock Ownership
Authors: Aishwarya Venkat, Anastasia Marshak, Ryan B. Simpson, Elena N. Naumova
Abstract:
Livestock ownership is simultaneously linked to improved nutritional status, through increased availability of animal-source protein, and to increased risk of enteric infections, through higher exposure to contaminated water sources. Agrarian and agro-pastoral households, especially those with cattle, goats, and sheep, are highly dependent on seasonally varying environmental conditions, which directly impact nutrition and health. This study explores global, spatiotemporally explicit evidence regarding the relationship between livestock ownership, enteric infections, and malnutrition. Seasonal and cyclical fluctuations, as well as mediating effects, are further examined to elucidate the health and nutrition outcomes of individual and communal livestock ownership. The US Agency for International Development's Demographic and Health Surveys (DHS) and the United Nations International Children's Emergency Fund's Multiple Indicator Cluster Surveys (MICS) provide valuable sources of household-level information on anthropometry, asset ownership, and disease outcomes. These data are especially important in data-sparse regions, where surveys may only be conducted in the aftermath of emergencies. Child-level disease history, anthropometry, and household-level asset ownership information have been collected since DHS-V (2003–present) and MICS-III (2005–present). This analysis combines over 15 years of survey data from DHS and MICS to study 2,466,257 children under age five from 82 countries. Subnational (administrative level 1) measures of diarrhea prevalence, mean livestock ownership by type, and mean and median anthropometric measures (height-for-age, weight-for-age, and weight-for-height) were investigated. The effects of several environmental, market, community, and household-level determinants were studied. Such covariates included precipitation, temperature, vegetation, the market price of staple cereals and animal-source proteins, conflict events, livelihood zones, wealth indices, and access to water, sanitation, hygiene, and public health services. Children aged 0–6 months, 6 months–2 years, and 2–5 years were compared separately. All observations were standardized to interview day of year, and administrative units were harmonized for consistent comparisons over time. Geographically weighted regressions were constructed for each outcome and subnational unit. Preliminary results demonstrate the importance of accounting for seasonality in concurrent assessments of malnutrition and enteric infections. Household assets, including livestock, often determine the intensity of these outcomes. In many regions, livestock ownership affects seasonal fluxes in malnutrition and enteric infections, which are also directly affected by environmental and local factors. Regression analysis demonstrates the spatiotemporal variability in nutrition outcomes due to a variety of causal factors. This analysis presents a synthesis of evidence from global survey data on the interrelationship between enteric infections, malnutrition, and livestock. These results provide a starting point for locally appropriate interventions designed to address this nexus in a timely manner and simultaneously improve health, nutrition, and livelihoods.
Keywords: diarrhea, enteric infections, households, livestock, malnutrition, seasonality
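One simple way to "account for seasonality" with interview-day-standardized observations, as advocated above, is a harmonic (cosinor) regression on day of year; the sketch below uses simulated data and is an illustrative approach, not the authors' geographically weighted specification.

```python
# Illustrative harmonic regression: recover seasonal amplitude and peak timing of
# an outcome such as diarrhea prevalence from interview day of year (simulated data).
import numpy as np

rng = np.random.default_rng(0)
day = rng.integers(1, 366, size=500)                    # interview day of year
peak_day = 200
prevalence = 0.15 + 0.05 * np.cos(2 * np.pi * (day - peak_day) / 365) + rng.normal(0, 0.02, 500)

# Design matrix: intercept plus one annual sine/cosine harmonic
X = np.column_stack([np.ones_like(day, dtype=float),
                     np.sin(2 * np.pi * day / 365),
                     np.cos(2 * np.pi * day / 365)])
beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)

amplitude = np.hypot(beta[1], beta[2])
peak = (np.degrees(np.arctan2(beta[1], beta[2])) % 360) / 360 * 365
print(f"mean level {beta[0]:.3f}, seasonal amplitude {amplitude:.3f}, peak near day {peak:.0f}")
```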
Procedia PDF Downloads 126
120 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to diagnose schistosomiasis effectively among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity, and other biomarkers. The dipstick test was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, and the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples serves as a critical biomarker for schistosomiasis infections, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts.
The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area.
Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
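The correlation step described above (the study used R 4.3.1) can be illustrated in Python with a handful of fabricated dipstick records; the values below are placeholders purely to show the computation, not the study's data.

```python
# Illustrative Pearson correlation matrix across dipstick parameters (fabricated records).
import pandas as pd

urine = pd.DataFrame({
    "haematuria":       [0, 1, 0, 1, 1, 0, 0, 1],
    "leukocytes":       [0, 1, 0, 0, 1, 0, 1, 1],
    "protein":          [0, 1, 0, 0, 0, 0, 0, 1],
    "specific_gravity": [1.010, 1.025, 1.015, 1.030, 1.020, 1.012, 1.018, 1.028],
    "pH":               [6.5, 5.5, 6.0, 5.0, 5.5, 6.5, 6.0, 5.0],
})

corr = urine.corr(method="pearson")
print(corr.round(2))
```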
Procedia PDF Downloads 20
119 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator
Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra
Abstract:
We have developed a CFD-coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations can be solved implicitly for mass, momentum, and energy transfer along with aerosol dynamics. The computationally efficient framework can simulate the temporal behavior of the total number concentration and the number size distribution. This formulation uniquely couples a standard k-epsilon scheme and a boundary layer model with detailed aerosol dynamics through residence time. The model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data, apart from other usual inputs, for simulations. The model predictions show that bulk fluid motion and local heat distribution can significantly affect aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy-generated turbulence was found to affect parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. A condensation particle counter (CPC) and a scanning mobility particle sizer (SMPS) were used to measure the total number concentration and number size distribution at the outlet of the reactor cell during these experiments. Our model-predicted results were found to be in reasonable agreement with the observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications in the context of the behavior of aerosol particles generated by the glowing wire technique and, in general, for other similar large-scale domains. Incorporation of CFD into the aerosol microphysics framework provides a realistic platform to study natural-convection-driven systems and applications. Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with the Navier-Stokes equations, modified to include a buoyancy-coupled k-epsilon turbulence model. The coupled flow-aerosol dynamics equations were solved numerically with an implicit scheme. Wire composition and temperature (wire surface and cell domain) were obtained/measured to be used as input for the model simulations. Model simulations showed a significant effect of fluid properties on the dynamics of aerosol particles. The role of buoyancy was highlighted by the observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against the measured temporal evolution of total number concentration and size distribution at the outlet of the hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. The steady-state number size distribution matched very well for sub-10 nm particle diameters, while reasonable differences were noticed for higher size ranges. Although tuned specifically for the present context (i.e., aerosol generation from a hot wire generator), the model can also be used for diverse applications, e.g., emission of particles from hot zones (chimneys, exhaust), fires, and atmospheric cloud dynamics.
Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics
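To illustrate what the coagulation and wall-deposition sub-modules mentioned above do, a zero-dimensional, monodisperse toy version is sketched below; the coefficients are order-of-magnitude placeholders, and the actual model resolves the size distribution and the coupled flow field.

```python
# Toy 0-D aerosol balance: dN/dt = -K*N^2 (coagulation) - beta*N (wall deposition).
K = 1e-9        # coagulation coefficient, cm^3/s (assumed order of magnitude)
beta = 1e-3     # wall-deposition loss rate, 1/s (depends on cell geometry and flow)
N = 1e7         # initial number concentration, 1/cm^3
dt, t_end = 0.1, 60.0

t = 0.0
while t < t_end:
    dNdt = -K * N * N - beta * N
    N += dt * dNdt
    t += dt

print(f"number concentration after {t_end:.0f} s: {N:.3e} cm^-3")
```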
Procedia PDF Downloads 143
118 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented overcomes this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio on which the underlying rotor angular velocity and position MRAS observer relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that with very few modifications it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
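The general MRAS idea behind such observers, a reference quantity independent of the estimated variable, an adjustable model driven by the estimate, and a PI adaptation law closing the loop, can be sketched generically as below; this is an illustrative phase-locked structure with made-up gains, not the specific parameter-free BDFRG observer described in the abstract.

```python
# Generic MRAS-style speed estimation sketch: a reference vector rotates at the true
# (unmeasured) speed, an adjustable model integrates the estimated speed, and a PI
# adaptation law drives the cross-product error between the two vectors to zero.
import math

w_true = 2 * math.pi * 10      # true electrical speed, rad/s (unknown to the observer)
w_est = 0.0                    # estimated speed
theta_ref = theta_est = 0.0
kp, ki, integ = 80.0, 400.0, 0.0
dt = 1e-4

for _ in range(20000):         # 2 s of simulated time
    theta_ref += w_true * dt
    theta_est += w_est * dt
    # cross product of the two unit vectors = sin(angle error)
    err = math.sin(theta_ref) * math.cos(theta_est) - math.cos(theta_ref) * math.sin(theta_est)
    integ += err * dt
    w_est = kp * err + ki * integ

print(f"true speed {w_true:.1f} rad/s, estimated {w_est:.1f} rad/s")
```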
Procedia PDF Downloads 62
117 Horizontal Cooperative Game Theory in Hotel Revenue Management
Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin
Abstract:
This research studies pricing strategy in a cooperative setting of a hotel duopoly selling a perishable product under a fixed capacity constraint, from the perspective of managers. In hotel revenue management, the competitor's average room rate and occupancy rate should be taken into consideration by a manager determining a pricing strategy to generate optimum revenue. This information is not provided by business intelligence or available on the competitor's website. Thus, Information Sharing (IS) among players might result in an improved pricing strategy. IS is widely adopted in the logistics industry, but IS within the hospitality industry has not been well studied. This research treats IS as one of the cooperative game schemes, alongside the Mutual Price Setting (MPS) scheme. In the off-peak season, hotel managers arrange pricing strategies offering promotion packages and various kinds of discounts of up to 60% of the full price to attract customers. A competitor selling a homogeneous product will react in the same way, which then triggers a price war. A price war, which generates lower revenue, may be avoided by collaborating on pricing strategy to optimize the payoff for both players. In the MPS cooperative game, players collaborate to set a room rate applied by both players. A cooperative game may avoid the unfavorable payoffs caused by a price war. Research on horizontal cooperative games in logistics shows better performance and payoffs for the players; however, horizontal cooperative games in hotel revenue management have not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS and MPS), which are compared to models under a non-cooperative scheme. Each scheme has five models: the Capacity Allocation Model, Demand Model, Revenue Model, Optimal Price Model, and Equilibrium Price Model. The Capacity Allocation and Demand Models employ the hotel's own and the competitor's full and discount prices as predictors under a non-linear relation. The optimal price is obtained by assuming a revenue-maximization motive. The equilibrium price is obtained by interacting the hotel's own and the competitor's optimal prices through reaction equations. Equilibrium is analyzed using a game theory approach. The same sequence applies to all three schemes. The MPS scheme, by contrast, aims to optimize the total payoff of the players. The case study in which the theoretical models are applied observes two hotels offering a homogeneous product in Indonesia during one year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. The case study data confirm that price behaves within the demand model in a non-linear manner. The IS models represent the actual demand and revenue data better than the non-IS models. Furthermore, IS enables hotels to earn significantly higher revenue. Thus, duopoly hotel players in general might have reasonable incentives to share information horizontally. During the off-peak season, the MPS models are able to predict the optimal equal price for both hotels. However, a Nash equilibrium may not always exist, depending on the actual payoffs of adhering to or betraying the mutual agreement. To optimize performance, a horizontal cooperative game may be chosen over a non-cooperative game. Mathematical models can be used to detect collusion among business players.
Empirical testing can be used as policy input for market regulators in preventing unethical business practices that potentially harm societal welfare.
Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting
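The reaction-equation/equilibrium step described above can be illustrated with a deliberately simplified linear demand system; the coefficients below are toy values (the paper's estimated demand models are non-linear), and the joint price shown mimics the spirit of the MPS scheme.

```python
# Toy duopoly: demand d_i = a - b*p_i + c*p_j; iterate best responses to a Nash
# equilibrium and compare with the jointly revenue-maximising (MPS-style) price.
a, b, c = 200.0, 2.0, 0.8      # own-price and cross-price sensitivities (toy values)

def best_response(p_other):
    """Revenue r = p*(a - b*p + c*p_other) is maximised at p = (a + c*p_other)/(2b)."""
    return (a + c * p_other) / (2 * b)

p1, p2 = 50.0, 80.0            # arbitrary starting prices
for _ in range(100):
    p1, p2 = best_response(p2), best_response(p1)

print(f"Nash equilibrium prices: {p1:.2f}, {p2:.2f}")
print(f"joint (MPS-style) price maximising total revenue: {a / (2 * (b - c)):.2f}")
```

With these toy coefficients the Nash price (62.5) sits below the jointly optimal price (83.3), mirroring the price-war versus mutual-price-setting contrast discussed in the abstract.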
Procedia PDF Downloads 289
116 Identifying Effective Strategies to Promote Vietnamese Fashion Brands in an Internationally Dominated Market
Authors: Lam Hong Lan, Gabor Sarlos
Abstract:
It is hard to find best practices in promotion for local fashion brands in Vietnam, as the industry is still very young. Local fashion start-ups have grown quickly in the last five years, thanks in part to the internet and social media. However, local designer/owners can face a huge challenge when competing with international brands in the Vietnamese market, and few local case studies are available for guidance. In response, this paper studied how local small- to medium-sized enterprises (SMEs) promote to their target customers in order to compete with international brands. Knowledge of both successful and unsuccessful approaches generated by this study is intended both to contribute to the academic literature on local fashion in Vietnam and to help local designers learn from it and improve their brand-building strategies. The primary study featured qualitative data collection via semi-structured in-depth interviews. Transcription and data analysis were conducted manually in order to identify success factors that local brands should consider as part of their promotion strategy. Purposive sampling of SMEs identified five designers in Ho Chi Minh City (the biggest city in Vietnam) and three designers in Hanoi (the second biggest) as interviewees. Participant attributes included: born in the 1980s or 1990s; familiar with the internet and social media; designer/owner of a successful local fashion brand in the key middle-market and/or mass-market segments (which are crucial to the growth of local brands). A secondary study was conducted using social listening software to gather further qualitative data on what were considered to be successful or unsuccessful approaches to local fashion brand promotion on social media. Both the primary and secondary studies indicated that local designers had maximized their promotion budgets by using owned media and earned media instead of paid media. Findings from the qualitative interviews indicate that the internet and social media have been used as effective promotion platforms by local fashion start-ups. Facebook and Instagram were the most popular social networks used by the SMEs interviewed, and these social platforms were believed to offer a more affordable promotional strategy than traditional media such as TV and/or print advertising. Online stores were considered an important factor in helping the SMEs to reach customers beyond the physical store. Furthermore, a successful online store allowed some SMEs to reduce their business rental costs by maintaining their physical store in a cheaper, less central city area as opposed to a more traditional city-center store location. In addition, the comparatively small size of the SMEs allowed them to be more attentive to their customers, leading to higher customer satisfaction and rates of return. In conclusion, this study found that these kinds of cost savings helped the SMEs interviewed to focus their scarce resources on producing unique, high-quality collections in order to differentiate themselves from international brands. Facebook and Instagram were the main platforms used for promotion and brand-building. The main challenge to this promotion strategy identified by the SMEs interviewed was to continue to find innovative ways to maximize the impact of a limited marketing budget.
Keywords: Vietnam, SMEs, fashion brands, promotion, marketing, social listening
Procedia PDF Downloads 125
115 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy
Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro
Abstract:
Asphaltic bitumen has been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This comes from the reaction between the -NCO pendant groups of the oligomer and the most polar groups of asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves the thermal, mechanical, and barrier properties of bitumen. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD) and atomic force microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes. However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion in the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network.
As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.
Keywords: atomic force microscopy, bitumen, composite, isocyanate, montmorillonite
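The intercalation evidence from XRD rests on the Bragg's-law conversion of the clay's basal reflection angle to a d-spacing; the short sketch below uses illustrative 2-theta values (not the measured patterns) to show how a shift to a lower angle translates into an expanded gallery.

```python
# Bragg's law for the clay basal spacing: d = lambda / (2 sin(theta)), Cu K-alpha assumed.
import math

wavelength = 1.5406   # Cu K-alpha, angstrom

def d_spacing(two_theta_deg, lam=wavelength):
    """Return d (angstrom) from the 2-theta position of a reflection."""
    return lam / (2 * math.sin(math.radians(two_theta_deg / 2)))

for label, two_theta in [("pristine organoclay", 3.5), ("clay in bitumen composite", 2.6)]:
    print(f"{label}: 2theta = {two_theta} deg -> d001 = {d_spacing(two_theta):.1f} angstrom")
```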
Procedia PDF Downloads 261
114 Benchmarking of Petroleum Tanker Discharge Operations at a Nigerian Coastal Terminal and Jetty Facilitates Optimization of the Ship–Shore Interface
Authors: Bassey O. Bassey
Abstract:
Benchmarking has progressively become entrenched as a requisite activity for process improvement and enhanced service delivery at petroleum jetties and terminals, most especially during tanker discharge operations at the ship-shore interface, as avoidable delays result in extra operating costs, non-productive time, high demurrage payments, and ultimately product scarcity. The jetty and terminal in focus had been operational for 3 and 8 years respectively, with proper operational and logistic records maintained to evaluate their progress over time in order to plan and implement modifications and reviews of procedures for greater technical and economic efficiency. Regular and emergency staff meetings were held on a team, departmental, and company-wide basis to progressively address the major challenges encountered during each operation. The process and outcome of the resulting collectively planned changes carried out within the past two years form the basis of this paper, which mirrors the initiatives effected to enhance operational and maintenance excellence at the affected facilities. Operational modifications included a second cargo receipt line designated for gasoline, product loss control at the jetty and shore ends, enhanced product recovery and quality control, and revival of terminal-jetty backloading operations. Logistic improvements were the incorporation of an internal logistics firm and shipping agency, fast-tracking of discharge procedures for tankers, optimization of the tank vessel selection process, and third-party product receipt and throughput. Maintenance excellence was achieved through the construction of two new lay barges and refurbishment of the existing one; revamping of the existing booster pump and purchase of a modern one as reserve capacity; extension of Phase 1 of the jetty to accommodate two vessels and construction of Phase 2 for two more vessels; regular inspection, draining, drying, and replacement of cargo hoses; a corrosion management program for all process facilities; and an improved, properly planned and documented maintenance culture. Safety, environmental, and security compliance were enhanced by installing state-of-the-art firefighting facilities and equipment, constructing a seawater intake line as a backup for the borehole at the terminal, remediating the shoreline and marine structures, deploying modern spill containment equipment, improving housekeeping and accident prevention practices, and installing high-technology security enhancements, among others. The end result, observed over the past two years, includes improved tanker turnaround time, higher turnover on product sales, consistent product availability, greater indigenous human capacity utilisation by way of direct hires and contracts, as well as customer loyalty. The lessons learnt from this exercise would, therefore, serve as a model to be adapted by other operators of similar facilities, contractors, academics, and consultants in a bid to deliver greater sustainability and profitability of operations at the ship-shore interface in this strategic industry.
Keywords: benchmarking, optimisation, petroleum jetty, petroleum terminal
Procedia PDF Downloads 366
113 Structured-Ness and Contextual Retrieval Underlie Language Comprehension
Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo
Abstract:
While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses this issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.…] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement to an event, assuming that these verbs require eventive complements. However, in closer examination, examples like “Chapter 1 began the book” undermine this assumption. An alternative, Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g. begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g. spatial, eventive, temporal, informational). The composition of an animate subject and an AspV such as (1) engenders an ambiguity between an agentive reading along the eventive dimension like (2), and a constitutive reading along the informational/spatial dimension like (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…), (C) Constitutive-biasing, in which a constitutive meaning was hinted (Larry owns collections of Renaissance literature.), (N) Neutral context, which allowed both interpretations. Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair from a 1~5 scale (5=fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder for the Neutral condition, as compared to the biasing conditions because no contextual information is provided to resolve an ambiguity. Also, comprehenders should obtain the specific interpretation corresponding to the context type. Results: (A) Agentive-biasing and (C) Constitutive-biasing were rated higher than (N) Neutral conditions (p< .001), while all conditions were within the acceptable range (> 3.5 on the 1~5 scale). This suggests that when lacking relevant contextual information, semantic ambiguity decreases comprehensibility. The interpretation task shows that the participants selected the biased agentive/constitutive reading for condition (A) and (C) respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: These findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that semantic representation makes reference to spatial structured-ness (abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question. 
This study connects semantic structure to human conceptual structure and provides a processing model that incorporates contextual retrieval.
Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual
Procedia PDF Downloads 333
112 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in liquid detergent production, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study addresses these limitations by optimizing machine learning algorithms for defect characterization in the liquid detergent manufacturing process. Several machine learning models (Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks) were tested on the detection and classification of defects such as off-specification viscosity, color deviations, improper bottle filling, and packaging anomalies. The algorithms were optimized using techniques including hyperparameter tuning and ensemble learning to improve detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, combining real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, CNNs fine-tuned with real-time imaging data reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, with a reduction in false positives of about 30%. The optimized SVM model detected formulation defects, namely viscosity and color variations, with 94% accuracy. These performance metrics represent a large gain in detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, model optimization accelerates defect characterization: with real-time data processing, detection time falls below 15 seconds, compared with an average of 3 minutes for manual inspection. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time, machine-learning-driven monitoring supports predictive maintenance and corrective action, yielding a 20% improvement in overall production efficiency. Optimizing machine learning algorithms for defect characterization therefore offers liquid detergent manufacturers scalability, efficiency, improved operational performance, and higher product quality. More generally, the approach could be applied across the fast-moving consumer goods sector to improve quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
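A minimal sketch of the kind of optimization described above, hyperparameter tuning of an SVM classifier on tabular process parameters, is given below. The feature names (viscosity, colour index, fill volume) and the synthetic data are illustrative assumptions; the study's actual dataset, features, and tuned models are not public and are not reproduced here.

```python
# Minimal sketch: hyperparameter-tuned SVM for defect classification on
# tabular process data. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Columns stand in for viscosity, colour index, and fill volume (hypothetical).
X = rng.normal(size=(n, 3))
# Mark a sample as defective when viscosity or fill volume drifts far from target.
y = ((np.abs(X[:, 0]) > 1.5) | (np.abs(X[:, 2]) > 1.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Grid search over regularisation strength and kernel width.
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1.0]}
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid, cv=5, scoring="f1")
model.fit(X_train, y_train)

print("best params:", model.best_params_)
print(classification_report(y_test, model.predict(X_test)))
```

The same pattern extends to the other model families mentioned in the abstract by swapping the estimator and its parameter grid.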
Procedia PDF Downloads 18
111 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with free water surface) or closed (pressurized). Independently of the reactor type, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was also compared to that predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section and one multiple concentric annular cross section reactor configurations were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
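As a minimal sketch of the friction head-loss estimate discussed above, the snippet below applies the Darcy-Weisbach equation to a concentric annulus using the hydraulic-diameter approximation. The dimensions, flow rate, and roughness are invented placeholders, not the reactors studied in the paper, and the laminar friction factor uses the circular-pipe form, whereas annular flow carries a geometry correction that the paper's more detailed models would account for.

```python
# Minimal sketch: wall-friction head loss in a concentric annular section
# via the hydraulic-diameter approximation (all numbers are illustrative).
import math

def head_loss_annulus(q, d_outer, d_inner, length, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Darcy-Weisbach head loss [m] for flow rate q [m^3/s] through an annulus."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)
    d_h = d_outer - d_inner              # hydraulic diameter of an annulus
    v = q / area                         # mean axial velocity
    re = v * d_h / nu                    # Reynolds number
    if re < 2300:                        # laminar: circular-pipe form (approximation)
        f = 64.0 / re
    else:                                # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(eps / (3.7 * d_h) + 5.74 / re**0.9) ** 2
    return f * (length / d_h) * v**2 / (2.0 * g), re

h_f, re = head_loss_annulus(q=2.0e-4, d_outer=0.05, d_inner=0.03, length=1.2)
print(f"Re = {re:.0f}, friction head loss = {h_f * 1000:.2f} mm")
```

Minor (accessory) losses and the multiple-annulus case would be added on top of this wall-friction term.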
Procedia PDF Downloads 218
110 Human Wildlife Conflict Outside Protected Areas of Nepal: Causes, Consequences and Mitigation Strategies
Authors: Kedar Baral
Abstract:
This study was carried out in the Mustang, Kaski, Tanahun, Baitadi, and Jhapa districts of Nepal. The study explored the spatial and temporal patterns of human-wildlife conflict (HWC), the socio-economic factors associated with it, the impacts of conflict on people's lives and livelihoods and on the survival of wildlife species, and the impact of climate change and forest fire on HWC. The study also evaluated people's attitudes towards wildlife conservation and assessed relevant policies and programs. A questionnaire survey was carried out with 250 respondents, and both socio-demographic and HWC-related information were collected. Secondary information was collected from Divisional Forest Offices and the Annapurna Conservation Area Project. HWC events were grouped by season, month, and site (forest type, distance from forest and settlement), and the coordinates of the events were exported to ArcGIS. Collected data were analyzed using descriptive statistics in Excel and R. A total of 1,465 events were recorded in the 5 districts between 2015 and 2019. Of these, livestock killing, crop damage, human attack, and cattle shed damage events accounted for 70%, 12%, 11%, and 7%, respectively. Among 151 human attack cases, 23 people were killed, and 128 were injured. Elephants in the Terai, common leopards and monkeys in the Middle Mountains, and snow leopards in the high mountains were found to be the major problem animals. Common leopard attacks occurred more often in autumn, in the evening, and in human settlement areas, whereas elephant attacks were more frequent in winter, during the daytime, and on farmland. Poor farmers were the most affected, losing 26% of their income to crop raiding and livestock depredation. On the other hand, people kill many wild animals in revenge, and this number is increasing every year. Based on people's perception, climate change is causing increased temperatures and forest fire events and is reducing water sources within the forest. Due to the scarcity of food and water within forests, wildlife is compelled to move into human settlement areas; hence, HWC events are increasing. Nevertheless, more than half of the respondents were positive about conserving all wildlife species. Forests outside PAs are under the community forestry (CF) system, which has restored the forest, improved the habitat, and increased wildlife populations. However, CF policies and programs were found to focus mainly on forest management, with little priority given to wildlife conservation and HWC mitigation. The government's compensation/relief scheme for wildlife damage was found to be somewhat effective in managing HWC, but the lengthy process, its applicability to damage caused by only a few wildlife species, and the rapidly increasing number of events make it necessary to revisit it. Based on these findings, the study suggests carrying out awareness-raising activities for poor farmers, linking people's property to insurance schemes, conducting habitat management activities within CF, promoting unpalatable crops, improving livestock sheds, simplifying the compensation scheme and establishing a fund at the district level, and incorporating wildlife conservation and HWC mitigation programs into CF. Finally, the study suggests rigorous research to understand the impacts of current forest management practices on forests, biodiversity, wildlife, and HWC.
Keywords: community forest, conflict mitigation, wildlife conservation, climate change
Procedia PDF Downloads 117
109 Palynological Investigation and Quality Determination of Honeys from Some Apiaries in Northern Nigeria
Authors: Alebiosu Olugbenga Shadrak, Victor Victoria
Abstract:
Honey bees exhibit preferences in their foraging behaviour on pollen and nectar for food and honey production, respectively. Melissopalynology is the study of pollen in honey and other honey products. Several studies have been conducted on the palynology of honeys from the southern parts of Nigeria, but records from the northern region of the country are relatively scant. The present study aimed at revealing the plants favourably visited by honey bees, Apis mellifera var. adansonii, at some apiaries in northern Nigeria, as well as determining the quality of the honeys produced. Honeys were harvested and collected from four apiaries of the region, namely: Sarkin Dawa missionary bee farm, Taraba State; Eleeshuwa Bee Farm, Keffi, Nassarawa State; Bulus Beekeeper Apiaries, Kagarko, Kaduna State; and Mai Gwava Bee Farm, Kano State. These honeys were acetolysed for palynological microscopic analysis and subjected to standard treatment methods for the determination of their proximate composition and sugar profiling. Fresh anthers of the two dominantly represented plants in the honeys were then collected for the quantification of their pollen protein contents, using the micro-Kjeldahl procedure. A total of 30 pollen types were identified in the four honeys, and some of them were common to the different honeys. A classification method for expressing pollen frequency classes was employed: Senna cf. siamea, Terminalia cf. catappa, Mangifera indica, Parinari curatelifolia, Vitellaria paradoxa, Elaeis guineensis, Parkia biglobosa, Phyllanthus muellerianus and Berlinia grandiflora were classed as “Frequent” (16-45%), while the others were either “Rare” (3-15%) or “Sporadic” (less than 3%). Pollen protein levels of the two abundantly represented plants, Senna siamea (15.90 mg/ml) and Terminalia catappa (17.33 mg/ml), were found to be considerably low. The biochemical analyses revealed varying proximate compositions, non-reducing sugar and total sugar levels in the honeys. The results of this study indicate that pollen and nectar of the “Frequent” plants were preferentially foraged by honeybees in the apiaries. The estimated pollen protein contents of Senna siamea and Terminalia catappa were considerably low and are not likely to have influenced their favourable visitation by honeybees. However, the relatively higher representation of Senna cf. siamea in the pollen spectrum might have resulted from its characteristically bright-coloured and well-scented flowers, which aid greater entomophily. Terminalia catappa, Mangifera indica, Elaeis guineensis, Vitellaria paradoxa, and Parkia biglobosa are typical food crops; hence they probably attracted the honeybees owing to the rich nutritional value of their fruits and seeds. Another possible reason for the greater entomophily of the favourably visited plants is certain nutritional constituents of their pollen and nectar, which were not investigated in this study. The nutritional composition of the honeys was observed to fall within the safe limits of international norms, as prescribed by the Codex Alimentarius Commission; the honeys are therefore fit for human consumption. It is therefore imperative to adopt strategic conservation steps to ensure that these favourably visited plants are protected from indiscriminate anthropogenic activities, and also to encourage apiarists in the country to establish their bee farms closer to these plants for optimal honey yield.
Keywords: honeybees, melissopalynology, preferentially foraged, nutritional, bee farms, proximally
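A minimal sketch of the frequency-class rule quoted above (Frequent 16-45%, Rare 3-15%, Sporadic below 3%) is given below. The taxa and raw counts are invented for illustration only; they are not the study's data, and any percentage above 45% simply falls outside the classes quoted in the abstract.

```python
# Minimal sketch: assign the pollen frequency classes quoted in the abstract
# to illustrative (not real) pollen counts.
def frequency_class(percent):
    if percent < 3:
        return "Sporadic"
    if percent <= 15:
        return "Rare"
    if percent <= 45:
        return "Frequent"
    return "above the classes quoted in the abstract"

pollen_counts = {          # hypothetical counts per taxon from one honey sample
    "Senna siamea": 180,
    "Terminalia catappa": 150,
    "Elaeis guineensis": 40,
    "Lannea sp.": 10,
}
total = sum(pollen_counts.values())
for taxon, count in pollen_counts.items():
    pct = 100.0 * count / total
    print(f"{taxon}: {pct:.1f}% -> {frequency_class(pct)}")
```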
Procedia PDF Downloads 278
108 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions
Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos
Abstract:
The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass under high temperature in the absence of air. The amounts of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. The activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: essentially rural, with a predominantly farming, agricultural, and forestry profile, and with significant charcoal production activity. In this district, a recent inventory identified almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas sampling were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume were calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon, inorganic ions, and sugars account for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The particulate matter emission factors for charcoal production in the traditional kiln were 33 g/kg and 27 g/kg of dry wood for PM₁₀ and PM₂.₅, respectively. The data obtained in this study help fill the gap in information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT, the Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon
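The snippet below is a minimal sketch of the two quantities reported above: a plume PM concentration derived from a filter sample and an emission factor expressed per kilogram of dry wood. All input numbers are illustrative placeholders, and the campaign's actual derivation of the total emitted mass (typically via a carbon mass balance over the whole cycle) is not reproduced here.

```python
# Minimal sketch: plume PM concentration from a filter sample and an
# emission factor per kg of dry wood. Inputs are illustrative only.
def pm_concentration(filter_mass_gain_mg, sampled_volume_m3):
    """PM concentration in the plume [g m^-3]."""
    return filter_mass_gain_mg / 1000.0 / sampled_volume_m3

def emission_factor(total_pm_emitted_g, wood_mass_kg, moisture_fraction):
    """Emission factor [g per kg of wood, dry basis]."""
    dry_wood_kg = wood_mass_kg * (1.0 - moisture_fraction)
    return total_pm_emitted_g / dry_wood_kg

c_pm10 = pm_concentration(filter_mass_gain_mg=14.6, sampled_volume_m3=0.05)
ef_pm10 = emission_factor(total_pm_emitted_g=1.2e6, wood_mass_kg=40000,
                          moisture_fraction=0.10)
print(f"plume PM10 concentration ~ {c_pm10:.3f} g/m3")
print(f"PM10 emission factor ~ {ef_pm10:.0f} g per kg of dry wood")
```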
Procedia PDF Downloads 142
107 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases
Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar
Abstract:
Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from impairment of DNA repair, specifically the recognition and repair of ultraviolet radiation (UVR)-induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancer, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas with frequent consanguinity. Because the disease is extremely rare, literature on XP and its associated complications is limited. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India during a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or the onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age at presentation was 1.2 years (range 7 months to 3 years), and three children presented during infancy. The male-to-female ratio was 5:3, and all were born of consanguineous marriages. They presented with dermatological manifestations (100%) followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varying degrees of ocular involvement, and three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, clouding and scarring of the cornea with complete or partial loss of vision, and ophthalmic malignancies. Half of the cases (n=4) had pre-malignant (actinic keratosis) and malignant skin and ocular lesions, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in early childhood. One patient had multiple malignancies simultaneously (SCC, BCC, and melanoma). Subnormal intelligence was noted as a neurological feature, and none had sensorineural hearing loss, microcephaly, neuroregression, or neurological deficit. All the patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists and psychiatrists. Conclusion: To date there is no complete cure for XP, and the disease is ultimately fatal. However, increased awareness, early diagnosis followed by persistent rigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients’ quality of life and life expectancy. Further research is required on formulating optimal management of XP, specifically the role and possibilities of gene therapy in XP.
Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa
Procedia PDF Downloads 76
106 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. 
This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.
Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
Procedia PDF Downloads 33
105 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials
Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco
Abstract:
Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, the correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical solution to obtain sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed for use in polymer nanocomposites, including carbon nanotubes and graphene. Indeed, thermal conductivities for both carbon nanotubes and graphene have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically depending on size, number of layers, density of topological and re-hybridization defects, and presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp2-bonded carbons, still remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphene (MLG), reduced graphene oxide (rGO) or graphite nanoplatelets (GNP) are currently the most interesting options. In this paper, different types of graphene-related materials were characterized for the chemical/physical as well as thermal properties of individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The increase in thermal conductivity of individual flakes upon annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both pristine rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by reactive melt extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectivity nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity performance. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than that of the pristine polymer, was obtained with 10 wt% of annealed GNPs, which is in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.
Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites
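The in-plane thermal diffusivity measured on the nanopapers relates to thermal conductivity through the standard relation k = alpha * rho * cp. The sketch below illustrates that conversion; the diffusivity, density, and heat capacity values are rough placeholder figures, not the paper's measurements.

```python
# Minimal sketch: convert in-plane thermal diffusivity to conductivity,
# k = alpha * rho * cp. Property values are illustrative placeholders.
def conductivity_from_diffusivity(alpha_mm2_s, density_kg_m3, cp_J_kgK):
    alpha_m2_s = alpha_mm2_s * 1e-6          # mm^2/s -> m^2/s
    return alpha_m2_s * density_kg_m3 * cp_J_kgK   # W m^-1 K^-1

k_pristine = conductivity_from_diffusivity(alpha_mm2_s=20.0,
                                            density_kg_m3=1800, cp_J_kgK=710)
k_annealed = conductivity_from_diffusivity(alpha_mm2_s=100.0,
                                            density_kg_m3=1800, cp_J_kgK=710)
print(f"pristine rGO paper: k ~ {k_pristine:.0f} W/mK")
print(f"annealed rGO paper: k ~ {k_annealed:.0f} W/mK "
      f"(a five-fold diffusivity gain maps to a five-fold conductivity gain "
      f"at constant density and heat capacity)")
```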
Procedia PDF Downloads 267
104 Silk Fibroin-PVP-Nanoparticles-Based Barrier Membranes for Tissue Regeneration
Authors: Ivone R. Oliveira, Isabela S. Gonçalves, Tiago M. B. Campos, Leandro J. Raniero, Luana M. R. Vasconcellos, João H. Lopes
Abstract:
Originally, the principles of guided tissue/bone regeneration (GTR/GBR) were followed to restore the architecture and functionality of the periodontal system. In essence, a biocompatible polymer-based occlusive membrane is used as a barrier to prevent migration of epithelial and connective tissue to the regenerating site. In this way, progenitor cells located in the remaining periodontal ligament can recolonize the root area and differentiate into new periodontal tissues, alveolar bone, and new connective attachment. The use of synthetic or collagen-derived membranes with or without calcium phosphate-based bone graft materials has been the treatment used. Ideally, these membranes need to exhibit sufficient initial mechanical strength to allow handling and implantation, withstand the various mechanical stresses suffered during surgery while maintaining their integrity, and support the process of bone tissue regeneration and repair by resisting cellular traction forces and wound contraction forces during tissue healing in vivo. Although different RTG/ROG products are available on the market, they have serious deficiencies in terms of mechanical strength. Aiming to improve the mechanical strength and osteogenic properties of the membrane, this work evaluated the production of membranes that integrate the biocompatibility of the natural polymer (silk fibroin - FS) and the synthetic polymer poly(vinyl pyrrolidone - PVP) with graphene nanoplates (NPG) and gold nanoparticles (AuNPs), using the electrospinning equipment (AeroSpinner L1.0 from Areka) which allows the execution of high voltage spinning and/or solution blowing and with a high production rate, enabling development on an industrial scale. Silk fibroin uniquely solved many of the problems presented by collagen and was used in this work because it has unique combined merits, such as programmable biodegradability, biocompatibility and sustainable large-scale production. Graphene has attracted considerable attention in recent years as a potential biomaterial for mechanical reinforcement because of its unique physicochemical properties and was added to improve the mechanical properties of the membranes associated or not with the presence of AuNPs, which have shown great potential in regulating osteoblast activity. The preparation of FS from silkworm cocoons involved cleaning, degumming, dissolution in lithium bromide, dialysis, lyophilization and dissolution in hexafluoroisopropanol (HFIP) to prepare the solution for electrospinning, and crosslinking tests were performed in methanol. The NPGs were characterized and underwent treatment in nitric acid for functionalization to improve the adhesion of the nanoplates to the PVP fibers. PVP-NPG membranes were produced with 0.5, 1.0 and 1.5 wt% functionalized or not and evaluated by SEM/FEG, FTIR, mechanical strength and cell culture assays. Functionalized GNP particles showed stronger binding, remaining adhered to the fibers. Increasing the graphene content resulted in higher mechanical strength of the membrane and greater biocompatibility. The production of FS-PVP-NPG-AuNPs hybrid membranes was performed by electrospinning in separate syringes and simultaneously the FS solution and the solution containing PVP-NPG 1.5 wt% in the presence or absence of AuNPs. After cross-linking, they were characterized by SEM/FEG, FTIR and behavior in cell culture. 
The presence of NPG-AuNPs increased cell viability and the formation of mineralization nodules.
Keywords: barrier membranes, silk fibroin, nanoparticles, tissue regeneration.
Procedia PDF Downloads 12
103 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome
Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo.
Abstract:
The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failure and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile and rapid post-operative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study, we report the results of this technique, either alone or in combination with cataract surgery, as a favorable option in the treatment of OAG owing to its benefits in terms of efficacy and safety. This is a retrospective, single-center study of consecutive OAG patients who underwent Xen45 Gel Stent implantation, alone or in combination with phacoemulsification, from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with IOP-lowering efficacy and the need for further procedures. Data were recorded in Microsoft Excel, and the analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points. The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. Compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%) and a multiple internal layer group (n=14, 38.9%). At the month-12 follow-up visit, there was no significant difference in IOP among the four groups. Adverse events included decreased bleb function (n=14, 38.9%), hypotony (n=8, 22.2%) and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided significant lowering of both IOP and antiglaucoma medication use, with a high safety profile.
Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant
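A minimal sketch of the statistical comparison described above, a paired t-test on baseline versus 12-month IOP and a Pearson correlation between Day-1 and Month-12 IOP, is shown below. The arrays are simulated from the reported summary statistics and are not the study's patient data; the original analysis was performed in Excel and SPSS.

```python
# Minimal sketch: paired t-test and Pearson correlation on simulated IOP data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_eyes = 36
iop_baseline = rng.normal(27.3, 7.7, n_eyes)          # mmHg, simulated
iop_month12 = rng.normal(16.3, 2.9, n_eyes)           # mmHg, simulated
iop_day1 = iop_month12 + rng.normal(0.0, 2.0, n_eyes)  # early postoperative IOP

t_stat, p_value = stats.ttest_rel(iop_baseline, iop_month12)
r, p_corr = stats.pearsonr(iop_day1, iop_month12)

print(f"mean IOP reduction: {iop_baseline.mean() - iop_month12.mean():.1f} mmHg "
      f"(paired t = {t_stat:.2f}, p = {p_value:.2e})")
print(f"Day-1 vs Month-12 IOP correlation: r = {r:.2f} (p = {p_corr:.3f})")
```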
Procedia PDF Downloads 141
102 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change
Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo
Abstract:
With the changing climate, precipitation events are intensified in the tropical river basins. Since these river basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on the agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk in terms of net benefit in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from the month of June to September. This river delta is highly flood-prone and has suffered from recurring high floods, especially after the 2000s. In this study, the lumped conceptual model, Nedbør Afstrømnings Model (NAM) from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11-Hydrodynamic (HD) model to route the runoffs up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely, BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM and NorESM1-M, available in the Coupled Model Intercomparison Project-Phase 5 (CMIP5) archive are considered. These nine GCMs are previously found to best-capture the Indian Summer Monsoon rainfall. Based on the performance of the nine GCMs in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR and MIROC-ESM-CHEM) are selected. A higher Taylor Skill Score is considered as the GCM selection criteria. Thereafter, the 10-year return period design flood is estimated using L-moments based flood frequency analysis for the historical and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5. A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate the flood inundation in the delta region. Historical and projected flood risk is defined based on the information about the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected time periods as compared to the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all the three selected GCMs. However, the flood risk then declines in the 2070s as we move towards the end of the century (2070-2099). The methodology adopted herein for flood risk assessment is one of its kind and may be implemented in any world-river basin. The results obtained from this study can help in future flood preparedness by implementing suitable flood adaptation strategies.Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation
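As a minimal sketch of the frequency analysis described above, the snippet below fits a Gumbel (EV1) distribution to an annual-maximum flow series via sample L-moments and evaluates a 10-year design flood. The annual-maximum series is synthetic, and the choice of the Gumbel distribution is an assumption for illustration; the study's GCM-driven flow series and its selected distribution are not reproduced here.

```python
# Minimal sketch: L-moments fit of a Gumbel (EV1) distribution and a
# T-year design-flood estimate on a synthetic annual-maximum series.
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # first probability-weighted moment
    return b0, 2.0 * b1 - b0                        # lambda_1, lambda_2

def gumbel_from_l_moments(lam1, lam2):
    beta = lam2 / np.log(2.0)                       # scale parameter
    xi = lam1 - 0.5772156649 * beta                 # location (Euler-Mascheroni constant)
    return xi, beta

def design_flood(xi, beta, T):
    # Quantile at non-exceedance probability F = 1 - 1/T
    return xi - beta * np.log(-np.log(1.0 - 1.0 / T))

rng = np.random.default_rng(7)
annual_max = rng.gumbel(loc=9000.0, scale=2500.0, size=40)   # m^3/s, synthetic
lam1, lam2 = sample_l_moments(annual_max)
xi, beta = gumbel_from_l_moments(lam1, lam2)
print(f"10-year design flood ~ {design_flood(xi, beta, 10):.0f} m^3/s")
```

The resulting design flood would then be shaped into a hydrograph and routed through the hydrodynamic model, as the abstract describes.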
Procedia PDF Downloads 75
101 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School
Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom
Abstract:
For an organization implementing a quality management system, the requirement to sustain compliance and a commitment to continuous improvement is even higher. Offices and units are expected to show high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under a project management system for which it holds a quality management certification. To further realize its mandate as a think-tank and capacity builder of the government, DAP expanded its operations and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs on public management and productivity and quality, aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which directly increases the number of students. There is an apparent need to realign the project management system toward a more educational system; otherwise, it will no longer be responsive to the developments taking place. The school strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country. If innovation does not take root within GSPDM, how can it serve the purpose of “walking the talk”? This research was conducted to assess the flows of the existing internal operations and processes of DAP’s project management and GSPDM’s school management, serving as the basis for developing a system that harmonizes them into one: the Learning Management System. The study documented the existing processes of GSPDM, following the project management phases of conceptualization and development, negotiation and contracting, mobilization, implementation, and closure, in flow charts of the key activities. The primary sources of information were respondents from the different groups involved in the delivery of graduate programs: the executive, the learning management team, and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all the programs within the GSPDM. The unique processes cover the three important areas of school management: students, curriculum, and faculty. The required processes of these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research identified critical reports and generated manageable documents and records to ensure accurate, consistent, and reliable information. The researchers conducted an in-depth review of the DAP-GSPDM mandate, analyzed the various documents, and conducted a series of focus group discussions. A comprehensive review of the prior flow chart system and of various models of school management systems was also made. The final output of the research is a work instructions manual that will be presented to the Academy’s Quality Management Council and eventually included as an additional scope for ISO certification.
The manual includes documented forms, iterative flow charts, and a program Gantt chart, alongside which automated systems will be developed in parallel.
Keywords: productivity, quality, learning management system, agency-based graduate school
Procedia PDF Downloads 319
100 Advocating for Indigenous Music in Latin American Music Education
Authors: Francisco Luis Reyes
Abstract:
European colonization had a profound impact on Latin America. The influence of the old continent can be perceived in the culture, religion, and language of the region, as well as in the beliefs and attitudes of the population. Music education is not an exception to this phenomenon. With Europeans controlling cultural life and erecting educational institutions across the continent for several centuries, Western European Art Music (WEAM) has polarized music learning in formal spaces. In contrast, the musics of the indigenous population, of the African slaves, and those that emerged as a result of the cultural mélanges have largely been excluded from primary and secondary schooling. The purpose of this paper is to advocate the inclusion of indigenous music in primary and secondary music education. The paper employs philosophical inquiry to achieve this aim. Philosophical inquiry seeks to uncover and examine individuals' unconscious beliefs, principles, values, and assumptions to envision potential possibilities. This involves identifying and describing issues within current music teaching and learning practices. High-quality philosophical research tackles problems that are sufficiently narrow (addressing a specific aspect of a single complex topic), realistic (reflecting the experiences of music education), and significant (addressing a widespread and timely issue). This methodological approach therefore fits the topic, as the research addresses the omnipresence of WEAM in Latin American music education and the exclusion of indigenous music, and argues for the transformational impact these artistic expressions can have on practices in the region. The paper first addresses how WEAM became ubiquitous in the region by recounting historical events and examining the barriers other types of music face in entering higher education. According to Shifres and Rosabal-Coto (2017), Latin America still upholds the musical heritage of its colonial period, and its formal music education institutions promote the European ontology instilled during European expansion. In line with this, the work of Reyes and Lorenzo-Quiles (2024) and of Soler, Lorenzo-Quiles, and Hargreaves (2014) demonstrates how music institutions in the region uphold foreign narratives. Their studies show that music programs in Puerto Rico and Colombia instruct students in WEAM and require skills in that art form to enter the profession, as other authors have also argued (Cain & Walden, 2019; Walden, 2016). Subsequently, the research explains the issues faced by prospective music educators who do not practice WEAM. Roberts (1991a, 1991b, 1993) and Green (2012) have found that music education students who do not adhere to the musical culture of their institution are less likely to finish their degrees. Hence, practitioners of traditional musics might feel out of place in this environment. The ubiquity of WEAM and the exclusion of the region's traditional musics constitute the primary challenges to the inclusion of indigenous musics in formal spaces in primary and secondary education. The presentation then lays out the framework for the inclusion of indigenous music and concludes with examples of how the musical expressions of the continent can improve music education practices in the region.
In closing, the article highlights the benefits of these musics that are lacking in current practices.
Keywords: indigenous music education, postmodern music education, decolonization in music education, music education practice, Latin American music education
Procedia PDF Downloads 33