Search results for: statistical distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8578


208 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancer characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a great need for new insights into the poorly understood aspects of tumors. Mathematical modeling, and continuum physics in particular, plays a pivotal role as a key tool for better understanding the complex behavior of tumors. Mathematical modeling can provide quantitative predictions of biological processes and help interpret complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and, depending on the tissue stiffness, stress accumulates inside the tumor. This stress can influence the tumor by breaking adherens junctions. During this process, the tumor stops its rapid proliferation and begins to remodel its shape to preserve a homeostatic equilibrium state. To achieve this, the tumor in turn upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperelastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. The invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients.
Mechanical interactions between the tumor and the ECM are modeled using the Ciarlet constitutive strain energy function, and the dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as the rates of nutrient uptake and cell proliferation, are obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were embedded in a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroids over time, and the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. Results showed that the level of invasion correlates directly with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained by the model.
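The growth side of such a model can be illustrated, in drastically simplified form, by a single nutrient-limited ODE for spheroid radius. The sketch below is illustrative only: it is not the authors' continuum model (there are no balance laws, stress tensors, or inelastic evolution here), and all parameter values (`r0`, `r_max`, `k`) are hypothetical.

```python
# Illustrative sketch only: a minimal logistic-style ODE for spheroid
# radius, standing in for nutrient/stress-limited growth. NOT the
# authors' continuum model; all parameters are hypothetical.

def grow_spheroid(r0=50.0, r_max=400.0, k=0.02, days=200, dt=0.1):
    """Forward-Euler integration of dr/dt = k * r * (1 - r / r_max).

    r0    : initial radius (micrometers, hypothetical)
    r_max : saturation radius imposed by nutrient/stress limits
    k     : growth rate per day
    """
    r = r0
    for _ in range(int(days / dt)):
        r += dt * k * r * (1.0 - r / r_max)
    return r

final_radius = grow_spheroid()  # approaches r_max but never exceeds it
```

Under these assumed parameters the radius saturates well below `r_max` within the simulated window, mimicking the plateau phase of avascular spheroids.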

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

Procedia PDF Downloads 106
207 Longitudinal Impact on Empowerment for Ugandan Women with Post-Primary Education

Authors: Shelley Jones

Abstract:

Assumptions abound that education for girls will, as a matter of course, lead to their economic empowerment as women; yet little is known about the ways in which schooling for girls who traditionally would not have had opportunities for post-primary, or perhaps even primary, education (such as the participants in this study, based in rural Uganda) actually affects their economic situations. There is a need for longitudinal studies in which women share experiences, understandings, and reflections on their lives that can inform our knowledge of this. In response, this paper reports on stage four of a longitudinal case study (2004-2018) focused on education and empowerment for girls and women in rural Uganda, in which 13 of the 15 participants from the original study took part. This paper understands empowerment as not simply increased opportunities (e.g., employment) but also real gains in power, freedoms that enable agentive action, and authentic and viable choices/alternatives that offer 'exit options' from unsatisfactory situations. As with the other stages, this study used a critical, postmodernist, global feminist ethnographic methodology with multimodal and qualitative data collection. Participants took part in interviews, focus group discussions, and a two-day workshop, which explored how, and whether, they understood post-primary education to have contributed to their economic empowerment. A constructivist grounded theory approach was used for data analysis to capture major themes. Findings indicate that although all participants believe that post-primary education provided them with economic opportunities they would not have had otherwise, the parameters of their economic empowerment were severely constrained by historic and extant sociocultural, economic, political, and institutional structures that continue to disempower girls and women, as well as by additional financial responsibilities that they assumed in order to support others.
Even though the participants had post-primary education and were able to obtain employment or operate their own businesses, which they would likely not have managed otherwise, the majority of the participants' incomes were not sufficient to lift them above the extreme poverty level, especially as many were single mothers and the sole income earners in their households. Furthermore, most deemed their working conditions unsatisfactory and their positions precarious; they also experienced sexual harassment and abuse in the labour force. Additionally, employment resulted in a double work burden for the participants: long days at work, followed by many hours of domestic work at home (which, even if they had spousal partners, still fell almost exclusively to the women). In conclusion, although the participants seem to have experienced some increase in economic empowerment, largely due to skills, knowledge, and qualifications gained at the post-primary level, numerous barriers prevented them from maximizing their capabilities and making significant gains in empowerment. There is a need, in addition to providing education (primary, secondary, and tertiary) to girls, to address the systemic gender inequalities that militate against women's empowerment, and to provide opportunities and freedom for women to come together and demand fair pay, reasonable working conditions and benefits, and freedom from gender-based harassment and assault in the workplace, as well as to advocate for the equal distribution of domestic work as a cultural change.

Keywords: girls' post-primary education, women's empowerment, Uganda, employment

Procedia PDF Downloads 143
206 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs

Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel

Abstract:

Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are major obstacles to the commercial utilization of these batteries. Proper thermal management can eliminate most of these limitations. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery; even a small temperature gradient can cause a large loss in the performance of a battery pack. In recent years, numerous researchers have worked on new techniques to implement better thermal management of Li-ion batteries, the main objective of which is to keep the battery cells within an optimal temperature range. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time, on the hypothesis that the thermal conductivity of the layer elements is high and the heat transfer rate is fast; instead of several thin layers, they model the cell as one thick layer. In previous work, we showed that a single-layer model is insufficient to simulate the thermal behavior and temperature nonuniformity of high-capacity Li-ion cells, and we studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated against experimental measurements at different current rates and ambient temperatures.
The real-time heat generation rate is also studied at different discharge rates. Results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control temperature uniformity. Design parameters such as channel number and width, inlet flow rate, and cooling fluid are optimized. As cooling fluids, water and air are compared, and pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
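The benefit of channel cooling can be illustrated with a lumped-capacitance energy balance, a much simpler surrogate for the 3D multilayer model described above. This is a hedged sketch under assumed parameter values (heat generation rate, convection coefficient, cell mass and heat capacity are all hypothetical), not the authors' model.

```python
# Hedged sketch: lumped-capacitance estimate of cell temperature rise
# with and without convective cooling. NOT the authors' 3D multilayer
# model; every parameter value below is illustrative only.

def cell_temperature(q_gen=5.0, m=1.0, cp=1000.0, h=0.0, area=0.05,
                     t_amb=25.0, seconds=1800, dt=1.0):
    """Euler integration of m*cp*dT/dt = q_gen - h*A*(T - T_amb).

    q_gen : internal heat generation [W] (hypothetical)
    h     : convection coefficient of the coolant channel [W/m^2K]
    area  : wetted cooling area [m^2]
    """
    T = t_amb
    for _ in range(int(seconds / dt)):
        T += dt * (q_gen - h * area * (T - t_amb)) / (m * cp)
    return T

dT_no_cooling = cell_temperature(h=0.0) - 25.0     # adiabatic cell
dT_cooled = cell_temperature(h=50.0) - 25.0        # forced convection
efficiency = 1.0 - dT_cooled / dT_no_cooling       # fractional reduction
```

In this toy setting, raising `h` (which a higher inlet flow rate would do in practice) shrinks the temperature rise, mirroring the qualitative trend reported above.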

Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management

Procedia PDF Downloads 162
205 Role of Toll-Like Receptor-2 in Female Genital Tuberculosis Infection and Its Severity

Authors: Swati Gautam, Salman Akhtar, S. P. Jaiswar, Amita Jain

Abstract:

Background: Female genital tuberculosis (FGTB) is now a major global health problem, mostly in developing countries, including India. In humans, Mycobacterium tuberculosis (M.tb) is the causative agent of the infection. A high index of suspicion is required for early diagnosis because of the asymptomatic presentation of FGTB. Toll-Like Receptor-2 (TLR-2) is one of the receptors that mediates the host's immune response to M.tb in macrophages, and its expression on macrophages is important in determining the fate of the innate immune response to M.tb. TLR-2 plays two roles: on one hand, its high expression on macrophages worsens the outcome of infection; on the other, it keeps M.tb in its dormant stage and avoids activation of M.tb from the latent phase. Single nucleotide polymorphisms (SNPs) of the TLR-2 gene play an important role in susceptibility to TB among different populations and, subsequently, in the development of infertility. Methodology: This case-control study was done in the Department of Obstetrics and Gynaecology and the Department of Microbiology at King George's Medical University, U.P., Lucknow, India. A total of 300 subjects (150 cases and 150 controls) were enrolled in the study after fulfilling the given inclusion and exclusion criteria. Inclusion criteria: age 20-35 years, menstrual irregularities, positive on acid-fast bacilli (AFB), TB-PCR, or (LJ/MGIT) culture of endometrial aspirate (EA). Exclusion criteria: active Koch's disease, on ATT, PCOS, endometriosis or fibroids, or positive for gonococcal or chlamydial infection. Blood samples were collected in EDTA tubes from cases and healthy control women (HCW), and genomic DNA extraction was carried out by the salting-out method. Genotyping of the TLR-2 genetic variants (Arg753Gln and Arg677Trp) was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on a 1.2% agarose gel and visualized with a gel documentation system.
Statistical analysis of the data was performed using SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Linkage disequilibrium (LD) analysis was done with the SNPStats online software. Results: For the TLR-2 (Arg753Gln) polymorphism, a significant risk of FGTB was observed with the GG homozygous mutant genotype (OR=13, CI=0.71-237.7, p=0.05) and the AG heterozygous mutant genotype (OR=13.7, CI=0.76-248.06, p=0.03); however, the G allele (OR=1.09, CI=0.78-1.52, p=0.67) individually was not associated with FGTB. For the TLR-2 (Arg677Trp) polymorphism, a significant risk of FGTB was observed with the TT homozygous mutant genotype (OR=0.020, CI=0.001-0.341, p < 0.001), the CT heterozygous mutant genotype (OR=0.53, CI=0.33-0.86, p=0.014), and the T allele (OR=0.463, CI=0.32-0.66, p < 0.001). The TT mutant genotype was found only in FGTB cases, while the frequency of the CT heterozygote was higher in the control group than in the FGTB group; thus, the CT genotype appears to be protective against FGTB susceptibility. In the haplotype analysis of the TLR-2 genetic variants, four possible combinations (G-T, A-C, G-C, and A-T) were obtained. The frequency of the A-C haplotype was significantly higher in FGTB cases (0.32); it was absent from the control group and found only in FGTB cases. Conclusion: The study showed a significant association of both TLR-2 genetic variants with FGTB. Moreover, the presence of the specific associated genotypes/alleles suggests the possibility of assessing disease severity, supports a clinical approach aimed at preventing extensive damage by the disease, and may also be helpful for early detection of the disease.
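The odds ratios and confidence intervals quoted above are the standard output of a 2x2 genotype table; the sketch below shows that computation using the Woolf (log) method on hypothetical counts (the study's actual genotype tables are not reproduced here).

```python
# Generic odds-ratio calculation for a 2x2 table via the Woolf (log)
# method. The counts in the example call are hypothetical, NOT the
# study's genotype data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = exposed cases,   b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 120, 15, 135)  # hypothetical counts
```

A CI that straddles 1.0 (as for the G allele above) indicates no significant association, even when the point estimate differs from 1.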

Keywords: ARMS, EDTA, FGTB, TLR

Procedia PDF Downloads 298
204 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens

Authors: Cynthia Adlerstein

Abstract:

The physical learning environment (PLE) has been considered, after family and educators, the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in levels of learning achievement in early childhood education (ECE). The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary research and development methodologies are required. Although the relevance of PLE attracts broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully designed recommendations requires innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is: what dimensions, variables, and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens? Based on a grounded theory social justice inquiry, we adopted a mixed-methods design that enabled a multivocal and interdisciplinary construction of data. Using in-depth interviews, discussion groups, questionnaires, and documentary analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables, and indicators through which PLE assessment is possible.
Subsequently, the instrument was applied in a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments in Chilean kindergartens was constructed and validated. The tool is structured around 7 dimensions (wellbeing, flexibility, empowerment, inclusiveness, symbolic meaningfulness, pedagogical intentionality, and institutional management), 19 variables, and 105 indicators that are assessed through observation and registration in a mobile app. The overall reliability of the instrument is .938, while the consistency of the individual dimensions varies between .773 (inclusiveness) and .946 (symbolic meaningfulness). The validation process, through expert opinion and factorial analysis (chi-square test), showed that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, practitioners and ECE stakeholders have considered the tool an intuitive, accessible, and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
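The internal-consistency figures quoted (the overall .938 and the per-dimension range) are presumably Cronbach's alpha; as a generic illustration, alpha can be computed from item-level scores as below. The scores are made up and do not come from the study.

```python
# Generic Cronbach's alpha for internal consistency. The item scores
# below are invented for illustration, NOT the study's data.

def cronbach_alpha(items):
    """items: one inner list of scores per item, all over the same
    respondents. Returns alpha = k/(k-1) * (1 - sum(item var)/total var).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three hypothetical indicator items scored by five observers:
scores = [[3, 4, 5, 4, 2], [3, 5, 5, 4, 2], [2, 4, 5, 5, 3]]
alpha = cronbach_alpha(scores)
```

Values above roughly .7 are conventionally read as acceptable consistency, which is why the reported .773 to .946 range supports the instrument's reliability.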

Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher

Procedia PDF Downloads 352
203 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies, such as GPS, drones, magnetometers, equipped cameras, and satellite imagery, together with software such as GIS, MapSource, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs for analysis, including GIS, MapSource, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, generation of statistics, graphic design, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an as yet unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: Iran, Sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 74
202 Climate Change Scenario Phenomenon in Malaysia: A Case Study in MADA Area

Authors: Shaidatul Azdawiyah Abdul Talib, Wan Mohd Razi Idris, Liew Ju Neng, Tukimat Lihan, Muhammad Zamir Abdul Rasid

Abstract:

Climate change has received great attention worldwide due to the impact of the extreme weather events it causes. Rainfall and temperature are crucial weather components associated with climate change. In Malaysia, increasing temperatures and changes in rainfall distribution patterns lead to drought and flood events in agricultural areas, especially rice fields. The Muda Agricultural Development Authority (MADA) area is the largest rice-growing area among the 10 granary areas in Malaysia and has faced floods and droughts in the past due to the changing climate. Changes in rainfall and temperature patterns affect rice yield. Therefore, trend analysis is important to identify changes in temperature and rainfall patterns, as it gives an initial overview for further analysis. Six locations across the MADA area were selected based on the availability of meteorological station (MetMalaysia) data. Historical data (1991 to 2020) collected from MetMalaysia, together with future climate projections from a multi-model ensemble of CMIP5 climate models (CNRM-CM5, GFDL-CM3, MRI-CGCM3, NorESM1-M, and IPSL-CM5A-LR), were analyzed using the Mann-Kendall test to detect time-series trends, along with the standardized precipitation anomaly, rainfall anomaly index, precipitation concentration index, and temperature anomaly. Future projection data were analyzed over three periods: early century (2020-2046), middle century (2047-2073), and late century (2074-2099). Results indicate that the MADA area does encounter extremely wet and dry conditions, which led to drought and flood events in the past. The Mann-Kendall (MK) trend test found a significant increasing trend (p < 0.05) in annual rainfall (z = 0.40; s = 15.12) and temperature (z = 0.61; s = 0.04) over the historical period.
Similarly, for both the RCP 4.5 and RCP 8.5 scenarios, a significant increasing trend (p < 0.05) was found for rainfall (RCP 4.5: z = 0.15, s = 2.55; RCP 8.5: z = 0.41, s = 8.05) and temperature (RCP 4.5: z = 0.84, s = 0.02; RCP 8.5: z = 0.94, s = 0.05). Under the RCP 4.5 scenario, the average temperature is projected to increase by up to 1.6 °C in the early century, 2.0 °C in the middle century, and 2.4 °C in the late century. In contrast, under the RCP 8.5 scenario, the average temperature is projected to increase by up to 1.8 °C in the early century, 3.1 °C in the middle century, and 4.3 °C in the late century. Drought is projected to occur in 2038 and 2043 (early century); 2052 and 2069 (middle century); and 2095 and 2097 to 2099 (late century) under the RCP 4.5 scenario. Under the RCP 8.5 scenario, drought is projected to occur in 2021, 2031, and 2034 (early century) and in 2069 (middle century); no drought is projected in the late century. This information can be used to analyze the impact of climate change scenarios on rice growth and yield, as well as on other crops in the MADA area. Additionally, this study would be helpful for researchers and decision-makers in developing applicable adaptation and mitigation strategies to reduce the impact of climate change.
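The z and s values quoted above come from the study's own analysis; as a generic reference, the Mann-Kendall S statistic and its normal approximation (here without the tie correction) can be sketched as follows, on synthetic data.

```python
# Minimal Mann-Kendall trend test (no tie correction). The series below
# is synthetic; this code does not reproduce the study's z/s values.
import math

def mann_kendall(x):
    """Return (S, Z): S counts concordant minus discordant pairs;
    Z is the continuity-corrected normal approximation."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

rising = [10.0 + 0.5 * t for t in range(20)]  # synthetic upward trend
s_stat, z_stat = mann_kendall(rising)
```

|Z| > 1.96 corresponds to a significant trend at p < 0.05 (two-sided), the threshold used in the abstract.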

Keywords: climate projection, drought, flood, rainfall, RCP 4.5, RCP 8.5, temperature

Procedia PDF Downloads 73
201 The Effects of Lithofacies on Oil Enrichment in Lucaogou Formation Fine-Grained Sedimentary Rocks in Santanghu Basin, China

Authors: Guoheng Liu, Zhilong Huang

Abstract:

For more than the past ten years, oil and gas have been produced from marine shales such as the Barnett Shale. In recent years, major breakthroughs have also been made in lacustrine shale gas exploration, such as in the Yanchang Formation of the Ordos Basin in China. The Lucaogou Formation shale, which is also lacustrine, has likewise yielded high production in recent years: wells such as M1, M6, and ML2 produced 5.6, 37.4, and 13.56 tons of oil per day, respectively. Lithological identification and classification of reservoirs are fundamental to oil and gas exploration. Lithology and lithofacies clearly control the distribution of oil and gas in lithological reservoirs, so it is of great significance to describe the lithological and lithofacies characteristics of reservoirs in detail. Lithofacies is an intrinsic property of rock formed under certain conditions of sedimentation, and fine-grained sedimentary rocks such as shale formed under different sedimentary conditions display great particularity and distinctiveness. Hence, to our best knowledge, no consistent and unified criteria or methods exist for defining and classifying the lithofacies of fine-grained sedimentary rocks; consequently, multiple parameters and disciplines are necessary. A series of qualitative descriptions and quantitative analyses were used to determine the lithofacies characteristics and their effect on oil accumulation in the Lucaogou Formation fine-grained sedimentary rocks of the Santanghu Basin. The qualitative descriptions include core description, petrographic thin-section observation, fluorescent thin-section observation, cathodoluminescence observation, and scanning electron microscope observation. The quantitative analyses include X-ray diffraction, total organic carbon analysis, Rock-Eval II pyrolysis, Soxhlet extraction, porosity and permeability analysis, and oil saturation analysis.
Three types of lithofacies are well developed in the study area: organic-rich massive shale lithofacies, organic-rich laminated and cloddy hybrid sedimentary lithofacies, and organic-lean massive carbonate lithofacies. The organic-rich massive shale lithofacies mainly includes massive shale and tuffaceous shale, of which quartz and clay minerals are the major components. The organic-rich laminated and cloddy hybrid sedimentary lithofacies contains laminae and cloddy structures; rocks of this lithofacies chiefly consist of dolomite and quartz. The organic-lean massive carbonate lithofacies mainly contains massively bedded fine-grained carbonate rocks, of which fine-grained dolomite accounts for the main part. The organic-rich massive shale lithofacies contains the highest content of free hydrocarbons and solid organic matter, and more pores are developed in it. The organic-lean massive carbonate lithofacies contains the lowest content of solid organic matter and develops the fewest pores, while the organic-rich laminated and cloddy hybrid sedimentary lithofacies develops the largest number of cracks and fractures. In summary, the organic-rich massive shale lithofacies is the most favorable type of lithofacies, whereas the organic-lean massive carbonate lithofacies cannot support large-scale oil accumulation.

Keywords: lithofacies classification, tuffaceous shale, oil enrichment, Lucaogou formation

Procedia PDF Downloads 213
200 Precancerous Lesions Related to Human Papillomavirus: The Importance of Cervicography as a Complementary Diagnostic Method

Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa

Abstract:

The aim of this study is to evaluate the use of digital cervicography (DC) in the diagnosis of precancerous lesions related to human papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro-Dean of Extension of the Federal University of Ceará from July to August 2015, with a sample of 33 women. Data were collected through interviews using a structured instrument. The DC technique followed the standardization of Franco (2005). Polymerase chain reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by 3 judges, and the results of DC and PCR were classified as positive, negative, or inconclusive. The data from the collection instruments were compiled and analyzed in the Statistical Package for the Social Sciences (SPSS) using descriptive statistics and cross-tabulations. Sociodemographic, sexual, and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine agreement between the DC reports and PCR, and among the judges on the DC results. Pearson's chi-square test was used to analyze the sociodemographic, sexual, and reproductive variables against the PCR reports, with p<0.05 considered statistically significant. The ethical aspects of research involving human beings were respected, according to Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, in equal proportion, were 21-30 and 41-50 years old (24.2% each). The majority reported brown skin color (84.8%), and 96.9% had completed, or were attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed, and 48.5% had an income between one and two minimum wages.
As for the sexual and reproductive characteristics, most were heterosexual (93.9%) and did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of a sexually transmitted infection (STI), with HPV the most prevalent (76.5%). 57.6% did not use contraception, 78.8% had undergone cervical cancer screening (PCCU) within the previous year or less, 72.7% had no family history of cervical cancer, 63.6% were multiparous, and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race/color showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance among the women in the sample, suggesting that further studies could establish it as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform PCCU in Brazil, which means that DC can be an important complementary diagnostic method for improving the quality of the examinations these professionals perform.
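The agreement and accuracy statistics reported (Cohen's kappa, sensitivity, specificity) are all derived from 2x2 tables; the sketch below shows the computations on a hypothetical DC-versus-PCR confusion table (the counts are invented, chosen only so the accuracy figures match the percentages quoted above).

```python
# Cohen's kappa, sensitivity, and specificity from a 2x2 confusion
# table. The counts are hypothetical, NOT the study's data; they are
# picked so sensitivity/specificity land near the reported 25%/77.8%.

def cohen_kappa(tp, fp, fn, tn):
    """Observed vs. chance-expected agreement for two binary raters."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                  # observed
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

tp, fp, fn, tn = 3, 4, 9, 14     # hypothetical DC (test) vs PCR (truth)
sensitivity, specificity = sens_spec(tp, fp, fn, tn)
```

With these counts, DC rarely flags true PCR positives (low sensitivity) but usually agrees on negatives (higher specificity), the same qualitative pattern the abstract reports.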

Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms

Procedia PDF Downloads 297
199 Inferring Influenza Epidemics in the Presence of Stratified Immunity

Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley

Abstract:

Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inferences about the epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically a titre of 1:40) to classify individuals as likely recently infected or likely immune and then estimate the cumulative incidence; such dichotomization can lead to underestimation of the influenza attack rate. In order to improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model of influenza transmission is proposed here, in which all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres were collected from 523 individuals during the pre-pandemic phase and 465 individuals during the post-pandemic phase of the 2009 pandemic in Hong Kong. The model was fitted to the serological data for an age-structured population in a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproductive number in the presence of stratified immunity, and its temporal dynamics were compared with those of a traditional epidemic model using dichotomized seropositivity data.
The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, best explained the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre of 1:40 (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] over the first four months after 1 May. RB then dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is monitoring disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
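The ΔDIC model comparisons above follow the standard definition DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ - D(θ̄) is the effective number of parameters. A toy sketch under a simple binomial likelihood (the counts and the conjugate beta posterior are illustrative stand-ins, not the study's fitted transmission model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: k seropositive out of n sampled (hypothetical numbers).
k, n = 120, 523

def deviance(p):
    # D = -2 log L for a binomial likelihood (constant terms dropped).
    return -2 * (k * np.log(p) + (n - k) * np.log(1 - p))

# Stand-in for posterior samples of the attack rate (e.g. from MCMC);
# here the conjugate Beta(k+1, n-k+1) posterior under a flat prior.
samples = rng.beta(k + 1, n - k + 1, size=5000)

d_bar = deviance(samples).mean()        # posterior mean deviance
p_d = d_bar - deviance(samples.mean())  # effective number of parameters
dic = d_bar + p_d
```

A lower DIC indicates a better trade-off between fit and complexity; ΔDIC values such as those quoted above compare candidate serological-response mechanisms on this scale.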

Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity

Procedia PDF Downloads 257
198 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

Earth is sometimes called the green planet; it is a terrestrial planet and, by another scientific classification, the fifth largest planet of the solar system. Plants are not distributed constantly or evenly around the world, and even the variation of plant species differs within a single region. The presence of plants is not limited to one field such as botany; they appear in fields as different as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence is all the more manifest since no other living species could exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, affecting the environment and climate. Plants are also the main components of agricultural activities, from which many countries benefit; they therefore have an impact on the political and economic situation and future of countries. Given the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and future studies. Moreover, plants survive in different places and regions by means of adaptations, which are their special advantage in hard living conditions. Weather is one of the parameters that affect plant life and existence in an area, and recognition of plants under different weather conditions opens a new window of research in the field. Only natural images are usable when weather conditions are considered as new factors; thus, the result will be a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor.
The other factor considered is the change of light intensity in the environment over the course of the day. Adding these factors poses a substantial challenge for building an accurate and robust system, yet the development of an efficient plant recognition system is essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interface or interaction. Due to the nature of the images used, a characteristic investigation of plants is carried out, and leaves are selected as the first, most reliable characteristic. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods, the implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed and, in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
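The HARRIS-SIFT variant named above couples a Harris corner detector with SIFT descriptors. A dependency-free sketch of the Harris corner response alone, as a simplified stand-in for the detection step (plain finite-difference gradients and a 3x3 box filter instead of Gaussian weighting; the synthetic test image is invented):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 on a grayscale image."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):  # 3x3 box filter via shifted sums of an edge-padded copy
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic image with one bright square: its corners score highest,
# edges score negative, flat regions score near zero.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

In the full HARRIS-SIFT pipeline, the local maxima of R would replace SIFT's difference-of-Gaussians keypoints, and SIFT descriptors would then be computed around them.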

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 270
197 The Relationship between 21st Century Digital Skills and the Intention to Start a Digital Entrepreneurship

Authors: Kathrin F. Schneider, Luis Xavier Unda Galarza

Abstract:

In our modern world, few areas are not permeated by digitalization: we use digital tools for work, study, entertainment, and daily life. Since technology changes rapidly, skills must adapt to the new reality, which gives a dynamic dimension to the set of skills necessary for people's academic, professional, and personal success. The concept of 21st-century digital skills, which includes collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, interpersonal skills, creativity, and productivity, has been widely discussed in the literature. Digital transformation has opened many economic opportunities for entrepreneurs in the development of their products, financing possibilities, and product distribution. One of the biggest advantages is the reduction in cost for the entrepreneur, which has opened doors not only for the individual entrepreneur or the entrepreneurial team but also for corporations, through intrapreneurship. The development of students' general literacy and their digital competencies is crucial for improving the effectiveness and efficiency of the learning process, as well as for students' adaptation to a constantly changing labor market. The digital economy allows a substantial increase in the supply of conventional and innovative products alike, mainly through five kinds of cost reduction commonly attributed to it: search, replication, transport, tracking, and verification costs. Digital entrepreneurship worldwide benefits from these achievements, and the use of digital technologies has expanded and democratized entrepreneurship. The digital transformation of recent years is more challenging for developing countries, as they have fewer resources available to carry it out while providing all the necessary support in terms of cybersecurity and education.
The degree of digitization (use of digital technology) in a country and the levels of digital literacy of its people often depend on the country's economic level and situation. Telefónica's Digital Life Index (TIDL) scores are strongly correlated with country wealth, reflecting the greater resources that richer countries can devote to promoting "Digital Life". According to the Digitization Index, Ecuador is in the group of "emerging countries", while Chile, Colombia, Brazil, Argentina, and Uruguay are in the group of "countries in transition". According to Herrera Espinoza et al. (2022), there are startups and digital ventures in Ecuador, especially in certain niches, but many of them do not survive beyond six months because they arise out of necessity rather than opportunity. However, relevant research, especially empirical research, is still lacking, so the picture remains unclear. Through a self-report questionnaire, the digital skills of students at a private Ecuadorian university will be measured against the six identified 21st-century skills. The results will be tested against the intention to start a digital venture, measured using the theory of planned behavior (TPB). The main hypothesis is that high digital competence is positively correlated with the intention to start a digital venture.
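The correlational hypothesis above would typically be tested with a correlation coefficient over the two questionnaire scores. A minimal sketch of Pearson's r (the score vectors are invented placeholders, not survey data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical per-respondent scores: digital-skills index vs.
# entrepreneurial-intention scale (e.g. 1-5 Likert averages).
skills    = [3.1, 4.2, 2.5, 4.8, 3.9, 2.2]
intention = [2.9, 4.0, 2.7, 4.9, 3.5, 2.0]
r = pearson_r(skills, intention)
```

A positive r close to 1 would support the hypothesis; in practice, a significance test and the TPB's mediating constructs (attitude, subjective norm, perceived control) would also be examined.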

Keywords: new literacies, digital transformation, 21st century skills, theory of planned behavior, digital entrepreneurship

Procedia PDF Downloads 96
196 Scanning Transmission Electron Microscopic Analysis of Gamma Ray Exposed Perovskite Solar Cells

Authors: Aleksandra Boldyreva, Alexander Golubnichiy, Artem Abakumov

Abstract:

Various perovskite materials have surprisingly high resistance towards high-energy electrons, protons, and hard ionizing radiation such as X-rays and gamma-rays. This superior radiation hardness makes the family of perovskite semiconductors an attractive candidate for single- and multijunction solar cells for the space environment and for X-ray and gamma-ray detectors. One method of studying the radiation hardness of materials is to expose them to gamma photons with high energies (above 500 keV). Herein, we have explored the recombination dynamics and defect concentration of a mixed-cation mixed-halide perovskite, Cs0.17FA0.83PbI1.8Br1.2, with a 1.74 eV bandgap after exposure to a gamma-ray source (2.5 Gy/min). We performed an advanced STEM EDX analysis to reveal the different types of defects formed during gamma exposure. It was found that a 10 kGy dose results in a significant improvement of perovskite crystallinity and a homogeneous distribution of I ions. While the absorber layer withstood the gamma exposure, the hole transport layer (PTAA) as well as the indium tin oxide (ITO) were significantly damaged, which increased the interface recombination rate and reduced the fill factor of the solar cells. Thus, STEM analysis is a powerful technique that can reveal defects formed by gamma exposure in perovskite solar cells. Methods: Data will be collected from perovskite solar cells (PSCs) and thin films exposed to a gamma-ray source. For the thin films, 50 μL of the Cs0.17FA0.83PbI1.8Br1.2 solution in DMF was deposited (dynamically) at 3000 rpm, followed by quenching with 100 μL of ethyl acetate (dropped 10 s after the perovskite precursor) applied at the same spin-coating frequency. The deposited Cs0.17FA0.83PbI1.8Br1.2 films were annealed for 10 min at 100 °C, which led to the development of a dark brown color. For the solar cells, a 10% suspension of SnO2 nanoparticles (Alfa Aesar) was deposited at 4000 rpm, followed by annealing in air at 170 °C for 20 min.
Next, the samples were introduced into a nitrogen glovebox for the deposition of all remaining layers. The perovskite film was applied in the same way as for the thin films described earlier. A solution of poly-triarylamine PTAA (Sigma-Aldrich; 4 mg in chlorobenzene) was applied at 1000 rpm atop the perovskite layer. Next, 30 nm of VOx was deposited atop the PTAA layer over the whole sample surface using the physical vapor deposition (PVD) technique. Silver electrodes (100 nm) were evaporated in a high vacuum (10⁻⁶ mbar) through a shadow mask, defining an active area of ~0.16 cm² for each device. The prepared samples (thin films and solar cells) were packed in Al lamination foil inside an argon glovebox. The set of samples consisted of 6 thin films and 6 solar cells, which were exposed to 6, 10, and 21 kGy (2 samples per dose) from a 137Cs gamma-ray source (E = 662 keV) at a dose rate of 2.5 Gy/min. The exposed samples will be studied with a focused ion beam (FIB) on a dual-beam scanning electron microscope from ThermoFisher, the Helios G4 Plasma FIB Uxe, operating with a xenon plasma.

Keywords: perovskite solar cells, transmission electron microscopy, radiation hardness, gamma irradiation

Procedia PDF Downloads 12
195 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required; thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and determining their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one must first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, the experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned towards software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and again to be a very powerful tool. In the process of creating a reliable model, one must have well-established and well-described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of models of two HPGe detectors through the implementation of the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, the dimensions of the inner crystal void, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data obtained from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4-1836.1 keV and 59.4-1212.9 keV, respectively.
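The Monte Carlo idea behind such FEP efficiency calculations can be reduced to a toy sketch: emit photons isotropically from a point source, test which ones intersect the detector face, and apply a full-absorption probability. This is a drastic simplification of what Geant4 actually tracks (no interaction physics, dead layers, or scattering), and all geometry and probability values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def fep_efficiency(n_photons, det_radius, source_dist, p_full_absorption):
    """Toy Monte Carlo estimate of full-energy-peak efficiency:
    isotropic point source facing a disc-shaped detector window, a purely
    geometric hit test, and a fixed full-absorption probability standing
    in for the detailed physics."""
    # Isotropic directions: cos(theta) uniform on [-1, 1].
    cos_t = rng.uniform(-1, 1, n_photons)
    sin_t = np.sqrt(1 - cos_t ** 2)
    # Keep photons travelling toward the detector plane (z > 0).
    fwd = cos_t > 0
    # Radius at which each forward ray crosses the plane z = source_dist.
    r_plane = source_dist * sin_t[fwd] / cos_t[fwd]
    hits = (r_plane <= det_radius).sum()
    full_energy = rng.binomial(hits, p_full_absorption)
    return full_energy / n_photons

eff = fep_efficiency(200_000, det_radius=3.0, source_dist=10.0,
                     p_full_absorption=0.3)
```

In a real simulation the fixed absorption probability is replaced by photon transport through the modelled crystal, which is exactly where the detector parameters optimised in the study (dead layers, void dimensions, window distance) enter.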

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 112
194 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, and the representation of orography. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and determines statistical weights through a least-squares minimization via a simple multiple regression; these weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. The approach is a powerful post-processing method for estimating weather forecast parameters and reducing direct model output errors. Although it can be applied successfully to continuous parameters such as temperature, humidity, wind speed, and mean sea level pressure, in this paper it is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed, based on the assumption that training on similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
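The training-phase regression described above can be sketched in a few lines: stack the member forecasts as regressors, solve a least-squares problem for the weights (with an intercept absorbing the collective bias), then apply the frozen weights in the forecast phase. The data here are synthetic, with assumed per-model biases, purely to illustrate the mechanics at a single grid point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training phase: past forecasts from 5 member models (columns) vs. observations.
n_days, n_models = 200, 5
truth = rng.gamma(2.0, 5.0, n_days)                 # observed rainfall (mm), synthetic
bias = np.array([4.0, -1.0, 2.5, 3.0, 1.5])         # assumed per-model systematic biases
forecasts = truth[:, None] + bias + rng.normal(0, 1.5, (n_days, n_models))

# Least-squares weights via multiple regression, intercept included.
X = np.column_stack([np.ones(n_days), forecasts])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)

# Forecast phase: apply the frozen weights to new member forecasts.
new_truth = rng.gamma(2.0, 5.0, 50)
new_fc = new_truth[:, None] + bias + rng.normal(0, 1.5, (50, n_models))
superensemble = np.column_stack([np.ones(50), new_fc]) @ coef
ensemble_mean = new_fc.mean(axis=1)
```

Because the regression removes each model's systematic bias while the plain ensemble mean inherits it, the superensemble error is smaller here, mirroring the skill ordering stated in the abstract.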

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 136
193 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. To increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that capture the complex spatial dynamics of terrorism at a local scale. Although empirical research at the country level has confirmed theories explaining the diffusion of terrorism across space and time, scholars have not yet assessed these diffusion theories at a local scale. Moreover, because scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. To address these shortcomings, this research proposes a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provides a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on lethal terrorist events after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), whose surface is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals.
Hence, for each location in the world, the model provides the probability of encountering a lethal terrorist attack, together with measures of volatility that inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic, and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
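As a drastically simplified stand-in for the binomial spatio-temporal point process above (no Delaunay mesh, no spatial random effects, no INLA), the covariate part of such a model, a binomial model with a logit link, can be fitted by iteratively reweighted least squares. The grid-cell data and coefficients below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic grid cells: an intercept plus two covariates (e.g. population
# density and distance to a hotspot, both standardised), and a binary
# outcome (lethal attack observed in the cell during a period).
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([-2.0, 1.2, -0.8])
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

# Newton's method for the logit link (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))          # fitted attack probabilities
    W = mu * (1 - mu)                         # IRLS weights
    grad = X.T @ (y - mu)                     # score vector
    hess = X.T @ (X * W[:, None])             # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
```

The Bayesian model in the paper additionally places structured spatio-temporal priors on latent effects, which is what the integrated nested Laplace approximation is designed to handle efficiently.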

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 343
192 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy

Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather

Abstract:

Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast agents. A major challenge in applying optical microscopy to in vivo tissue imaging is light attenuation, which limits the penetration depth and the achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed by detecting fluorescence emission orthogonal to the illumination axis; by scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition, a low photon dose to samples, and the optical sectioning that SPIM provides makes it an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on fluorescence reporters, whether endogenous or exogenous. This has the disadvantage that exogenous probes alter specimens from their native state, rendering them unsuitable for in vivo studies, and that fluorescence emission is generally weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM with downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and light scattered from intrinsic sources of optical contrast in the sample being studied.
This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel at varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D constructs was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results validated label-free SPIM as a viable approach for imaging cells in 3D gel constructs and native tissue. This technique has potential for use in a near-patient environment, providing results quickly in an easy-to-use manner, with more information, improved spatial resolution, and greater depth penetration than current approaches.
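The volume-reconstruction step, acquiring one 2D image per scan position and stacking the slices into a 3D array, can be sketched as follows (a synthetic fluorescent bead in noise stands in for real SPIM frames; all dimensions and intensities are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for SPIM acquisition: one 2D image per scan position along the
# detection (z) axis, with a bright "bead" centred at (z, y, x) = (18, 32, 20).
ny, nx, nz = 64, 64, 40
slices = []
for z in range(nz):
    img = rng.normal(0.0, 0.05, (ny, nx))  # camera noise
    img += np.exp(-(((np.arange(ny)[:, None] - 32) ** 2
                     + (np.arange(nx)[None, :] - 20) ** 2) / 18.0
                    + (z - 18) ** 2 / 8.0))  # Gaussian bead intensity
    slices.append(img)

# Reconstruct the 3D volume by stacking, then locate the bead by its maximum.
volume = np.stack(slices, axis=0)            # shape (nz, ny, nx)
z0, y0, x0 = np.unravel_index(volume.argmax(), volume.shape)
```

Calibration against beads of known size and position, as in the study's first step, checks that features recovered from the stacked volume land where they should.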

Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging

Procedia PDF Downloads 245
191 Advanced Bio-Fuels for Biorefineries: Incorporation of Waste Tires and Calcium-Based Catalysts to the Pyrolysis of Biomass

Authors: Alberto Veses, Olga Sanhauja, María Soledad Callén, Tomás García

Abstract:

The appropriate use of renewable sources is a decisive factor in minimizing the environmental impact of fossil fuel use. In particular, the use of lignocellulosic biomass is one of the most promising alternatives, since it is the only carbon-containing renewable source that can produce bioproducts similar to fossil fuels, and it does not compete with the food market. Among the processes that can valorize lignocellulosic biomass, pyrolysis is an attractive alternative because it is the only thermochemical process that can produce a liquid biofuel (bio-oil) in a simple way, together with solid and gas fractions that can be used as energy sources to support the process. However, to incorporate bio-oils into current infrastructures and process them further in future biorefineries, their quality needs to be improved. Introducing low-cost catalysts and/or incorporating polymer residues into the process are new, simple, and low-cost strategies for directly obtaining advanced bio-oils for future biorefineries in an economic way. In this manner, based on previous thermogravimetric analyses, local agricultural wastes such as grape seeds (GS) were selected as the lignocellulosic biomass, while waste tires (WT) were selected as the polymer residue. CaO was selected as a low-cost catalyst based on the group's previous experience. To this end, a specially designed fixed-bed reactor with N₂ as carrier gas was used. The reactor incorporates a vertical mobile liner that allows the feedstock to be introduced into the oven once the selected temperature (550 ºC) is reached, ensuring the higher heating rates needed for the process. Obtaining a well-defined phase distribution in the resulting bio-oil is crucial to ensure the viability of the process.
Thus, once the experiments were carried out, not only was a well-defined two-layer separation observed for several mixtures (reaching values of up to 40 wt.% WT), but also an upgraded organic phase, which is the one considered for further processing in biorefineries. Radical interactions between GS and WT species released during pyrolysis, together with dehydration reactions enhanced by CaO, can promote the formation of better-quality bio-oils. This was reflected in a reduction of the water and oxygen content of the bio-oil and, hence, a substantial increase in its heating value and stability. Moreover, not only was the sulphur content reduced relative to the pyrolysis of WT alone, but the potential problems associated with the strongly acidic environment of conventional bio-oils were also minimized, as reflected in a basic pH and lower total acid numbers. Acidic compounds formed in the pyrolysis, such as CO₂-like substances, can react with the CaO, minimizing the acidity problems associated with lignocellulosic bio-oils. Furthermore, this CO₂ capture promotes H₂ production via the water-gas shift reaction, favoring hydrogen-transfer reactions and improving the final quality of the bio-oil. These results show the great potential of grape seeds for catalytic co-pyrolysis with plastic residues to produce a liquid bio-oil that can be considered a high-quality renewable vector.

Keywords: advanced bio-oils, biorefinery, catalytic co-pyrolysis of biomass and waste tires, lignocellulosic biomass

Procedia PDF Downloads 230
190 A Systematic Review of Antimicrobial Resistance in Fish and Poultry – Health and Environmental Implications for Animal Source Food Production in Egypt, Nigeria, and South Africa

Authors: Ekemini M. Okon, Reuben C. Okocha, Babatunde T. Adesina, Judith O. Ehigie, Babatunde M. Falana, Boluwape T. Okikiola

Abstract:

Antimicrobial resistance (AMR) has evolved into a significant threat to global public health and food safety. The development of AMR in animals has been associated with antimicrobial overuse. In recent years, the quantity of antimicrobials used in food animals such as fish and poultry has escalated. It therefore becomes imperative to understand the patterns of AMR in fish and poultry and to map out future directions for better surveillance efforts. This study used the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) approach to assess the trend, patterns, and spatial distribution of AMR research in Egypt, Nigeria, and South Africa. A literature search was conducted through the Scopus and Web of Science databases, in which published studies on AMR between 1989 and 2021 were assessed. A total of 172 articles were relevant to this study. The results showed progressively increasing attention to AMR studies in fish and poultry from 2018 to 2021 across the selected countries. The period between 2018 (23 studies) and 2021 (25 studies) showed a significant increase in AMR publications, with a peak in 2019 (28 studies). Egypt was the leading exponent of AMR research (43%, n=74), followed by Nigeria (40%, n=69) and then South Africa (17%, n=29). AMR studies in fish received relatively little attention across countries; the majority of AMR studies were on poultry in Egypt (82%, n=61), Nigeria (87%, n=60), and South Africa (83%, n=24). Further, most of the studies were on Escherichia and Salmonella species. The antimicrobials most frequently researched were the ampicillin, erythromycin, tetracycline, trimethoprim, chloramphenicol, and sulfamethoxazole groups. Multiple drug resistance was prevalent, as demonstrated by the antimicrobial resistance patterns. 
In poultry, Escherichia coli isolates were resistant to cefotaxime, streptomycin, chloramphenicol, enrofloxacin, gentamycin, ciprofloxacin, oxytetracycline, kanamycin, nalidixic acid, tetracycline, trimethoprim/sulphamethoxazole, erythromycin, and ampicillin. Salmonella enterica serovars were resistant to tetracycline, trimethoprim/sulphamethoxazole, cefotaxime, and ampicillin. Staphylococcus aureus showed high-level resistance to streptomycin, kanamycin, erythromycin, cefoxitin, trimethoprim, vancomycin, ampicillin, and tetracycline. Campylobacter isolates were resistant to ceftriaxone, erythromycin, ciprofloxacin, tetracycline, and nalidixic acid to varying degrees. In fish, Enterococcus isolates showed resistance to penicillin, ampicillin, chloramphenicol, vancomycin, and tetracycline but were sensitive to ciprofloxacin, erythromycin, and rifampicin. Isolated strains of Vibrio species showed sensitivity to florfenicol and ciprofloxacin but resistance to trimethoprim/sulphamethoxazole and erythromycin. Isolates of Aeromonas and Pseudomonas species exhibited resistance to ampicillin and amoxicillin. Specifically, Aeromonas hydrophila isolates showed sensitivity to cephradine, doxycycline, erythromycin, and florfenicol; however, resistance was also exhibited against augmentin and tetracycline. The findings constitute public and environmental health threats and suggest the need to promote and advance AMR research in other countries, particularly those in global hotspots for antimicrobial use.
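The country-level shares reported above follow from straightforward tallies over the screened records. A minimal sketch of such a tally is shown below; the record list is a reconstruction consistent with the counts in the abstract (172 studies), not the actual dataset:

```python
from collections import Counter

# Hypothetical screened-record list: one (country, host) pair per included
# study, sized to match the totals reported in the abstract.
records = (
    [("Egypt", "poultry")] * 61 + [("Egypt", "fish")] * 13
    + [("Nigeria", "poultry")] * 60 + [("Nigeria", "fish")] * 9
    + [("South Africa", "poultry")] * 24 + [("South Africa", "fish")] * 5
)

by_country = Counter(country for country, _ in records)
total = sum(by_country.values())  # 172

# Percentage share of AMR publications per country, rounded as in the text:
# Egypt 74/172 -> 43%, Nigeria 69/172 -> 40%, South Africa 29/172 -> 17%.
shares = {c: round(100 * n / total) for c, n in by_country.items()}
```

The same Counter pattern, keyed on publication year or host species instead of country, would reproduce the temporal and poultry-versus-fish breakdowns.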

Keywords: antibiotics, antimicrobial resistance, bacteria, environment, public health

Procedia PDF Downloads 188
189 Early Predictive Signs for Kasai Procedure Success

Authors: Medan Isaeva, Anna Degtyareva

Abstract:

Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. 
By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who received the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for the early detection of patients who are less likely to benefit from the Kasai procedure, enabling clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. As a single-center study, its results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored and could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
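A prognostic model of the kind described, multifactorial logistic regression relating the gallbladder finding and post-operative laboratory values to surgical success, can be sketched as follows. The data below are simulated (the study's actual records are not public) and the variable names are illustrative; the simulation only reproduces the reported directions of effect:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 166  # cohort size from the abstract; the data themselves are synthetic

# Hypothetical predictors: gallbladder detected (0/1), post-operative
# cholesterol and direct bilirubin (standardised units).
gallbladder = rng.integers(0, 2, n)
cholesterol = rng.normal(0, 1, n)
bilirubin = rng.normal(0, 1, n)

# Simulate the reported directions of effect: a detectable gallbladder
# raises the odds of success; elevated cholesterol and bilirubin lower them.
logit = 0.5 + 1.5 * gallbladder - 1.0 * cholesterol - 1.2 * bilirubin
success = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([gallbladder, cholesterol, bilirubin])
model = LogisticRegression().fit(X, success)

signs = np.sign(model.coef_[0])  # expected pattern: [+, -, -]
```

With a fitted model of this form, the predicted probability of success for a new patient would come from `model.predict_proba`.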

Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival

Procedia PDF Downloads 50
188 About the State of Students’ Career Guidance in the Conditions of Inclusive Education in the Republic of Kazakhstan

Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip

Abstract:

Over the years of independence, Kazakhstan has not only ratified international documents regulating the rights of children to inclusive education but has also developed its own inclusive educational policy. Along with this, the state pays particular attention to high school students' preparedness for professional self-determination. However, a number of problematic issues in this field have been revealed, such as the lack of systemic mechanisms coordinating stakeholders' actions in preparing schoolchildren for a conscious choice of an in-demand profession that meets their individual capabilities and special educational needs (SEN). Analysis of the current situation indicates that school graduates' adaptation to the labor market does not meet the existing demands of society. According to the Ministry of Labor and Social Protection of the Population of the Republic of Kazakhstan, about 70% of Kazakhstani school graduates find it difficult to choose a profession, 87% of schoolchildren make their career choice under the influence of parents and school teachers, and 90% of schoolchildren and their parents have no idea about the most in-demand professions on the market. The results of the study conducted by Korlan Syzdykova in 2016 indicated the urgent need for Kazakhstani school graduates to obtain extensive information about in-demand professions and to receive professional assistance in choosing a profession in accordance with their individual skills, abilities, and preferences. The results of the survey conducted by the Information and Analytical Center among heads of colleges in 2020 showed that, despite significant steps in creating conditions for students with SEN, such students face challenges in studying because of the poor career guidance provided to them in schools. The results of the study conducted by the Center for Inclusive Education of the National Academy of Education named after Y. 
Altynsarin in the state's general education schools in 2021 demonstrated the lack of career guidance and of pedagogical and psychological support for children with SEN. To investigate these issues, a further study was conducted to examine the state of students' career guidance and socialization, taking into account their SEN. The hypothesis of this study proposed that, to prepare school graduates for a conscious career choice, school teachers and specialists need to develop their competencies in the early identification of students' interests, inclinations, and SEN and to ensure the necessary support for them. Five regions of the state were involved in the study according to geographical location. A triangulation approach was utilized to ensure the credibility and validity of the research findings, including both theoretical (analysis of existing statistical data, legal documents, and results of previous research) and empirical (school survey for students; interviews with parents, teachers, and representatives of school administration) methods. The data were analyzed independently and compared with each other. The survey included questions related to the provision of pedagogical support for school students in making their career choice. Ethical principles were observed in developing the methodology, collecting and analyzing the data, and distributing the results. Based on the results, methodological recommendations on students' career guidance were developed for school teachers and specialists, taking into account the students' individual capabilities and SEN.

Keywords: career guidance, children with special educational needs, inclusive education, Kazakhstan

Procedia PDF Downloads 161
187 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice

Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson

Abstract:

As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges and a commitment to developing and training ethical practitioners outline the characteristics of a profession embedded with leadership skills. This presentation will provide an overview of the historical context of social work leadership; examine social work as a unique leadership model composed of the qualities and theories that inform effective leadership capability as it relates to our code of ethics; reflect critically on leadership theories and their foundational comparison; and, finally, look at recommendations and implementation for social work education and practice. As with defining leadership itself, there is no universally accepted definition of social work leadership. However, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question seeks to answer whether social work training and competencies are sufficient to determine whether social workers believe they possess the capacity and skill to engage in leadership practice. 
Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate whether the development of social workers is sufficient to define leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, whereby leaders stand in relation with others and embody partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert-scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory designed to measure the capacity and skill of social work leadership.
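A Likert-scale measurement tool of the kind proposed is typically summarized by per-item mean scores. The following minimal sketch uses hypothetical competency items and responses, not the study's actual instrument:

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) to a set of leadership-competency items;
# the item names are illustrative only.
responses = {
    "ethical_practice": [5, 4, 4, 5, 3],
    "collaboration":    [4, 4, 5, 3, 4],
    "capacity_to_lead": [3, 4, 2, 3, 4],
}

# Per-competency mean score, the basic summary a Likert-scale tool reports.
means = {item: sum(vals) / len(vals) for item, vals in responses.items()}

# Overall leadership self-assessment: mean of the item means.
overall = sum(means.values()) / len(means)
```

In a real analysis, these summaries would be computed per respondent and compared across groups, e.g. by practice setting or years of experience.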

Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool

Procedia PDF Downloads 168
186 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences

Authors: Ruth Leitch, Joanne Hughes

Abstract:

This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within multiple deprived areas may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high stakes national tests. The study proposed that no single explanation or disparate set of explanations could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centralizes the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three (interconnecting) levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. 
Thematic analysis of the qualitative data reveals cross-cutting factors (e.g., demographic shifts and loss of community, the place of the school in the community, parental capacity), while analytic case studies of the explanatory factors associated with each community site also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or as inhibitors of educational achievement within and between communities. Key issues emerging as inhibitors or drivers of attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress and related substance abuse and mental health issues; differing discourses on notions of ‘community’ and ‘achievement’ within and between community sites; inter-agency and intra-agency levels of collaboration and joined-up working; the relationship between the home/school/community triad; and school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or the lack of it) within families, within schools, within each community, and within agencies, and also bridging social capital between home, school, and community, between different communities, and between key statutory and voluntary organisations. The presentation will outline the study rationale and its methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.

Keywords: educational achievement, multiple deprivation, community case studies, social capital

Procedia PDF Downloads 377
185 Monitoring of Formaldehyde over Punjab, Pakistan, Using Car MAX-DOAS and Satellite Observations

Authors: Waqas Ahmed Khan, Faheem Khokhaar

Abstract:

Air pollution is one of the main drivers of climate change. Greenhouse gases cause the melting of glaciers, changes in temperature, and heavy rainfall. Many gases are involved: formaldehyde (HCHO) is not a direct ozone-damaging precursor like CO₂ or methane, but it forms glyoxal (CHOCHO), which does affect ozone. Countries around the globe have unique air quality monitoring protocols to describe local air pollution. Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and to produce many household products and medical preservatives. Formaldehyde also occurs naturally in the environment; it is produced in small amounts by most living organisms as part of normal metabolic processes. Pakistan lacks large-scale monitoring facilities to measure atmospheric gases on a regular basis. Formaldehyde, through its link with glyoxal chemistry, affects mountain biodiversity and livelihoods, so its monitoring is necessary in order to maintain and preserve biodiversity. Objective: The present study aims to measure atmospheric HCHO vertical column densities (VCDs) obtained from the ground and to compare HCHO data over Punjab and elevated areas (Rawalpindi and Islamabad) by satellite observation during the period 2014-2015. Methodology: To explore the spatial distribution of HCHO, various field campaigns, including international scientists, were carried out using car MAX-DOAS. The major focus was on the cities along national highways and the industrial region of Punjab, Pakistan. Level-2 data products of the satellite instrument OMI, retrieved by the differential optical absorption spectroscopy (DOAS) technique, are used. The spatio-temporal distribution of HCHO column densities over the main cities and regions of Pakistan is discussed. Results: The results show high HCHO column densities, exceeding the permissible limit, over the main cities of Pakistan, particularly in areas with rapid urbanization and enhanced economic growth. 
The VCD values over elevated areas of Pakistan such as Islamabad and Rawalpindi are around 1.0×10¹⁶ to 34.01×10¹⁶ molecules/cm², while Punjab has values around 34.01×10¹⁶ molecules/cm². Similarly, areas with major industrial activity showed high HCHO concentrations. Tropospheric glyoxal VCDs were found to be 4.75×10¹⁵ molecules/cm². Conclusion: The results show that the monitoring site surrounded by the Margalla Hills (Islamabad) has higher concentrations of formaldehyde. Wind data show that industrial areas and areas with high economic growth have high values, as they provide pathways for the transmission of HCHO. The results obtained from this study would help the EPA, WHO, and air protection departments to monitor air quality and to further the preservation and restoration of mountain biodiversity.

Keywords: air quality, formaldehyde, MAX-DOAS, vertical column densities (VCDs), satellite instrument, climate change

Procedia PDF Downloads 207
184 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites

Authors: Mohammad M. Khan, Pankaj Agarwal

Abstract:

The present investigation deals with the microstructures and the mechanical and detailed wear characteristics of in situ TiC particle-reinforced zinc aluminum-based metal matrix composites. The composites were synthesized by a liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. The former was also lower in ultimate tensile strength and ductility compared to the matrix alloy, which could be attributed to the use of a coarser dispersoid size and larger interparticle spacing. A reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and matrix were observed. The composite exhibited a predominantly brittle mode of fracture, with microcracking in the dispersoid phase indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions; (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance, and load; and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear tests, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite relative to the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was smooth with shallow grooves. 
During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced by the reinforcement in the matrix, but this negative effect was outweighed by the abrasion resistance of the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while the subsurface regions revealed limited plastic deformation and microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase, along with its microcracking, and limited plastic deformation in the vicinity of the abraded surfaces.

Keywords: abrasive wear, liquid metallurgy, metal martix composite, SEM

Procedia PDF Downloads 145
183 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies

Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar

Abstract:

Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimisation in microfluidic channels extends significantly into biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimised, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow and mitigates the manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analysing diseases. 
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimised alongside channel width, length, and substrate porosity, creates a synergistic effect that minimises errors and maximises efficiency. This holistic optimisation approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.
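The simultaneous-optimisation idea behind RSM can be illustrated with a small sketch: fit a full second-order model to responses measured over a two-factor design and solve for the stationary point, rather than varying one factor at a time. The design factors, the simulated flow-time response, and its optimum below are all hypothetical stand-ins for the channel parameters discussed above:

```python
import numpy as np

# Coded design factors in [-1, 1]: x1 = channel width, x2 = channel length.
def quadratic_design(x1, x2):
    # Columns: 1, x1, x2, x1*x2, x1^2, x2^2 (full second-order model)
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Face-centred design points; a real study would measure flow time at each.
pts = np.array([(-1, -1), (1, -1), (-1, 1), (1, 1),
                (-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)], dtype=float)
x1, x2 = pts[:, 0], pts[:, 1]

# Simulated flow-time response with a known minimum at (0.2, -0.3).
y = 5 + (x1 - 0.2) ** 2 + 2 * (x2 + 0.3) ** 2

beta, *_ = np.linalg.lstsq(quadratic_design(x1, x2), y, rcond=None)
b0, b1, b2, b12, b11, b22 = beta

# Stationary point: set the gradient of the fitted quadratic to zero.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))  # recovers (0.2, -0.3)
```

Extending the design matrix with extra columns for sample volume and substrate porosity gives the multi-parameter optimisation the abstract describes, at the cost of a larger design.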

Keywords: microfluidic device, minitab, statistical optimization, response surface methodology

Procedia PDF Downloads 59
182 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. 
This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from park rangers in the Ugandan rain forest. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
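The imputation-then-ensemble pipeline described above can be sketched as follows. The features, the missingness pattern, and the use of scikit-learn's `IterativeImputer` in place of predictive mean matching are all illustrative assumptions, not the study's actual data or tooling:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Hypothetical patrol features (e.g. animal density, distance to road,
# elevation) with an attack/no-attack label; real data would come from
# ranger patrol records.
n = 400
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Knock out roughly 20% of entries at random to mimic missing observations.
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

# Model-based imputation step (a stand-in for the predictive mean matching
# and random-forest multiple imputation named in the abstract).
X_filled = IterativeImputer(random_state=0).fit_transform(X_missing)

# Gradient-boosting ensemble trained on the imputed feature matrix.
clf = GradientBoostingClassifier(random_state=0).fit(X_filled, y)
acc = clf.score(X_filled, y)  # training accuracy on the imputed data
```

A faithful evaluation would follow the paper's split (train on the imputed years, test on a held-out non-imputed year) rather than scoring on the training data as this sketch does.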

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 286
181 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperature affects mortality from cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and raises air temperature compared to surrounding areas. This increase is particularly important during summer heat waves. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded in Bucharest between 2010 and 2019. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions, with three internal knots at the 10th, 75th, and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolating the exposure-response associations beyond the observed data allowed us to estimate future effects of temperature changes on mortality under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative, under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. For RCP 8.5, the results of this analysis show an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future compared with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When mortality data are stratified by age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (>75 years) in the future is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of this work, performed by one of the authors, has received funding from the European Union's Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
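To make the attributable-fraction arithmetic concrete, here is a toy NumPy sketch of how a heat-attributable mortality fraction can be computed from an exposure-response curve; the linear-above-threshold log relative risk, the assumed 19 °C minimum-mortality temperature, and the simulated daily series are illustrative stand-ins, not the study's fitted DLNM:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_rr(temp, mmt=19.0, slope=0.03):
    """Hypothetical log relative risk: zero below the minimum-mortality
    temperature (mmt), rising linearly above it."""
    return np.maximum(0.0, slope * (temp - mmt))

# Simulated daily mean temperatures and cardiovascular death counts.
temps = rng.normal(24.0, 4.0, size=365)
deaths = rng.poisson(50, size=365)

# Daily attributable fraction AF_t = 1 - exp(-logRR(x_t)); attributable
# deaths on day t are AF_t * deaths_t (backward attributable-risk form).
af = 1.0 - np.exp(-log_rr(temps))
attributable_deaths = float(np.sum(af * deaths))
total_fraction = attributable_deaths / deaths.sum()
print(f"heat-attributable fraction: {total_fraction:.2%}")
```

In the actual study, the exposure-response curve comes from the fitted cross-basis and is integrated over lags, but the attribution step follows this same daily-fraction logic.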

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 123
180 SWOT Analysis on the Prospects of Carob Use in Human Nutrition: Crete, Greece

Authors: Georgios A. Fragkiadakis, Antonia Psaroudaki, Theodora Mouratidou, Eirini Sfakianaki

Abstract:

Research: Within the project "Actions for the optimal utilization of the potential of carob in the Region of Crete", which is financed and supervised by the Region, in collaboration with the University of Crete and the Hellenic Mediterranean University, a SWOT (strengths, weaknesses, opportunities, threats) survey was carried out to evaluate the prospects of carob in human nutrition in Crete. Results and conclusions: 1). Strengths: A local production of carob for human consumption exists, based on international reports and local-product reports. The data on products in the market (over 100 brands of carob food) indicate a sufficiency of carob materials offered in Crete. The variety of carob food products retailed in Crete indicates a strong demand-production-consumption trend. There is a stable core of businesses that invest significantly (Creta Carob, Cretan Mills, etc.). The great majority of the relevant food stores (bakery, confectionery, etc.) offer carob products. Carob products produced in Crete have a strong presence on the internet (over 20 main professionally designed websites). The promotion of carob food products is based on their variety and on a few historical elements connected with the Cretan diet. 2). Weaknesses: International prices for carob seed affect the sector; the seed had an international price of €20 per kg in 2021-22 and fell to €8 in 2022, causing losses to carob traders. Local producers do not sort the carobs they deliver for processing, causing 30-40% losses of the product in the industry. Occasional high prices trigger the collection of degraded raw material; large losses may emerge due to the action of insects. There are many carob trees whose fruits are not collected, e.g., in Apokoronas, Chania. The nutritional and commercial value of wild carob fruits is very low. Carob tree production is recorded by the Greek statistical services as "other cultures", in combination with prickly pear, creating difficulties in retrieving data. The percentage of carob used for human nutrition, as opposed to animal feeding, is not known. The exact imports of carob are not closely monitored. We have no data on the recycling of carob by-products in Crete. 3). Opportunities: Developing a culture of respect in the carob trade may improve professional relations in the sector. Monitoring the carob market and connecting production with retailing and industry needs may allow better market stability. Raw material evaluation procedures may be implemented to maintain the carob value chain. The state agricultural services may be further involved in carob health protection. Educating farmers on carob cultivation and management can improve the quality of the product. Selecting local productive varieties may improve the sustainability of the culture. Connecting the consumption of carob with health-food products may create added value in the sector. The presence and extent of wild carob trees in Crete represent, potentially, a target for grafting. 4). Threats: The annual fluctuation of carob yield challenges the programming of local food industry activities. Carob is also a forest species; there is a danger of crops being wrongly classified as forest areas where land ownership is unclear.

Keywords: human nutrition, carob food, SWOT analysis, Crete, Greece

Procedia PDF Downloads 88
179 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza

Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue

Abstract:

Intro: The COVID-19 (Coronavirus Disease 2019) pandemic has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza patients to characterize the two diseases. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some coexist with solid lesions. Lesion distribution: lesions are concentrated mainly on the dorsal side of the lung periphery, with a focus on the lower lobes, often close to the pleura. Other features include grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchograms, and halo signs. Severe disease involves both lungs entirely, showing white-lung signs; air bronchograms can be seen, and a small amount of pleural effusion may be present in both chest cavities. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchograms, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Targeting the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients are classified and diagnosed using deep learning algorithms.
Residual networks were proposed to solve the degradation problem that arises when a deep neural network (DNN) has too many hidden layers. The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs), solving the problem of training very deep CNN models; many visual tasks obtain excellent results by fine-tuning ResNet. The pre-trained ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai, built on PyTorch, packages best practices for deep learning and finds effective ways to handle diagnostic issues. Based on the one-cycle approach of the Fastai library, the classification diagnosis of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
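The one-cycle policy mentioned above can be sketched in plain Python; the schedule below is a minimal stand-alone approximation of the learning-rate curve used by fastai's `fit_one_cycle`, where `lr_max`, `pct_start`, and the two divisors are illustrative defaults rather than the abstract's actual settings:

```python
import math

def one_cycle_lr(step, total_steps, lr_max=1e-3, pct_start=0.25,
                 div=25.0, div_final=1e5):
    """Learning rate at `step` under a cosine one-cycle schedule:
    warm up from lr_max/div to lr_max over the first pct_start of
    training, then anneal down to lr_max/div_final."""
    def cos_interp(start, end, t):
        # cosine interpolation from `start` (t=0) to `end` (t=1)
        return end + (start - end) * (1 + math.cos(math.pi * t)) / 2
    warm = max(1, int(total_steps * pct_start))
    if step < warm:
        return cos_interp(lr_max / div, lr_max, step / warm)
    t = (step - warm) / max(1, total_steps - warm)
    return cos_interp(lr_max, lr_max / div_final, t)

# Full schedule for a hypothetical 1000-step fine-tuning run.
schedule = [one_cycle_lr(s, 1000) for s in range(1000)]
```

The brief warm-up lets a pre-trained feature extractor such as ResNet adapt without large destabilizing updates, while the long cosine decay helps the final classifier converge.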

Keywords: COVID-19, Fastai, influenza, transfer network

Procedia PDF Downloads 137