Search results for: susceptibility weighted
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1061

101 Alkali Activation of Fly Ash, Metakaolin and Slag Blends: Fresh and Hardened Properties

Authors: Weiliang Gong, Lissa Gomes, Lucile Raymond, Hui Xu, Werner Lutze, Ian L. Pegg

Abstract:

Alkali-activated materials, particularly geopolymers, have attracted much interest in academia, and commercial applications are on the rise as well. Geopolymers are typically produced by reacting one or two aluminosilicates with an alkaline solution at room temperature. Fly ash is an important aluminosilicate source. However, low-Ca fly ash, the byproduct of burning hard or black coal, reacts and sets slowly at room temperature, and the development of mechanical durability, e.g., compressive strength, is slow as well. The use of fly ashes with relatively high contents (> 6%) of unburned carbon, i.e., high loss on ignition (LOI), is particularly disadvantageous. This paper shows to what extent these impediments can be mitigated by mixing the fly ash with one or two more aluminosilicate sources. The fly ash used here is generated at the Orlando power plant (Florida, USA); it is low in Ca (< 1.5% CaO) and has a high LOI of > 6%. The additional aluminosilicate sources are metakaolin and blast furnace slag. Binary fly ash-metakaolin and ternary fly ash-metakaolin-slag geopolymers were prepared. Properties of the geopolymer pastes were measured before and after setting. Fresh mixtures of aluminosilicates with an alkaline solution were studied by Vicat needle penetration, rheology, and isothermal calorimetry up to initial setting and beyond. The hardened geopolymers were investigated by SEM/EDS, and the compressive strength was measured. Initial setting (fluid-to-solid transition) was indicated by a rapid increase in yield stress and plastic viscosity. The rheological times of setting were always shorter than the Vicat times of setting. Both times of setting decreased with increasing replacement of fly ash by blast furnace slag in the ternary fly ash-metakaolin-slag geopolymer system. As expected, setting with only Orlando fly ash was the slowest. Replacing 20% of the fly ash with metakaolin shortened the set time. Replacing increasing fractions of fly ash in the binary system by blast furnace slag (up to 30%) shortened the time of setting even further. The 28-day compressive strength increased drastically, from < 20 MPa to 90 MPa. The most interesting finding relates to the calorimetric measurements. The use of two or three aluminosilicates generated significantly more heat (20 to 65%) than that calculated from the weighted sum of the individual aluminosilicates. This synergetic heat contributes to, and may be responsible for, most of the increase in compressive strength of our binary and ternary geopolymers. The synergetic heat effect may also be related to increased incorporation of calcium in sodium aluminosilicate hydrate to form a hybrid (N,C)-A-S-H gel. The time of setting will be correlated with heat release and maximum heat flow.
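
As a minimal illustration of the weighted-sum baseline against which the synergetic heat is quantified (all heat values and blend fractions below are invented, not the authors' data):

```python
# Excess ("synergetic") heat of a blend relative to the weighted sum of its
# components, as described in the abstract. All numbers are illustrative.

def weighted_sum_heat(fractions, heats):
    """Expected heat (J/g) if the components reacted independently."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(f * q for f, q in zip(fractions, heats))

# Hypothetical cumulative heats for fly ash, metakaolin, slag (J/g binder)
component_heats = [120.0, 260.0, 310.0]
blend_fractions = [0.50, 0.20, 0.30]   # ternary blend composition
measured_heat = 320.0                  # hypothetical calorimeter value

expected = weighted_sum_heat(blend_fractions, component_heats)
excess_pct = 100.0 * (measured_heat - expected) / expected
print(f"expected {expected:.1f} J/g, synergetic excess {excess_pct:.1f}%")
```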

Keywords: alkali-activated materials, binary and ternary geopolymers, blends of fly ash, metakaolin and blast furnace slag, rheology, synergetic heats

Procedia PDF Downloads 116
100 Protective Effect of Ginger Root Extract on Dioxin-Induced Testicular Damage in Rats

Authors: Hamid Abdulroof Saleh

Abstract:

Background: Dioxins are among the most widely distributed environmental pollutants. They are formed as byproducts in some industries, such as the paper industry, and can be released into the atmosphere during the burning of garbage and waste, especially medical waste. Dioxins accumulate in the adipose tissues of animals in the food chain as well as in human breast milk. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is the most toxic component of a large group of dioxins. Humans are exposed to TCDD through contaminated food items such as meat, fish, milk products, and eggs. Recently, natural formulations aimed at reducing or eliminating TCDD toxicity have been in focus. Ginger rhizome (Zingiber officinale R., family: Zingiberaceae) is used worldwide as a spice. Both antioxidative and androgenic activity of Z. officinale have been reported in animal models. Researchers have shown that ginger oil has a dominant protective effect against DNA damage and might act as a scavenger of oxygen radicals, i.e., as an antioxidant. Aim of the work: The present study was undertaken to evaluate the toxic effect of TCDD on the structure and histoarchitecture of the testis and the protective role of co-administration of ginger root extract against this toxicity. Materials & Methods: Male adult rats of the Sprague-Dawley strain were assigned to four groups of eight rats each: a control group; a dioxin-treated group (given TCDD at a dose of 100 ng/kg body weight/day by gavage); a ginger-treated group (given 50 mg/kg body weight/day of ginger root extract by gavage); and a dioxin- and ginger-treated group (given TCDD at 100 ng/kg body weight/day and ginger root extract at 50 mg/kg body weight/day by gavage). After three weeks, the rats were weighed and sacrificed, and the testes were removed and weighed. The testes were processed for routine paraffin embedding and staining, and tissue sections were examined for morphometric and histopathological changes. Results: Dioxin administration had harmful effects on body weight, testis weight, and other morphometric parameters of the testis. In addition, it produced varying degrees of damage to the seminiferous tubules, which were shrunken and devoid of mature spermatids. The basement membrane was disorganized, with vacuolization and loss of germinal cells. Co-administration of ginger root extract produced obvious improvement in these changes and reversed the morphometric and histopathological alterations of the seminiferous tubules. Conclusion: Ginger root extract treatment in this study was successful in reversing the morphometric and histological changes of dioxin-induced testicular damage. It therefore has a protective effect on the testis against dioxin toxicity.

Keywords: dioxin, ginger, rat, testis

Procedia PDF Downloads 418
99 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 non-pregnant participants aged 20 and older from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: age was the most significant predictor, with diabetes prevalence increasing with age and peaking around the 60s for males and 70s for females; higher BMI was strongly associated with a higher risk of diabetes; Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%); diabetics had lower total cholesterol levels, particularly among White participants (a mean decline of 23.6 mg/dL); and smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
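
A hedged sketch of the modeling pipeline the abstract describes: a gradient boosting classifier trained with survey weights and interpreted with SHAP. The file name, column names, and weight variable are assumptions, not the authors' code, and categorical columns are assumed to be numerically encoded already:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nhanes_2009_2018.csv")  # hypothetical merged NHANES file
features = ["age", "sex", "ethnicity", "bmi", "leg_length", "tchol", "smoker"]
X, y, w = df[features], df["diabetes"], df["survey_weight"]

X_tr, X_te, y_tr, y_te, w_tr, _ = train_test_split(
    X, y, w, test_size=0.2, stratify=y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_tr, y_tr, sample_weight=w_tr)  # survey weights enter as sample weights
print("AUC-ROC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))

# Mean absolute SHAP value per feature as a global importance measure
shap_values = shap.TreeExplainer(gbm).shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=features)
        .sort_values(ascending=False))
```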

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 7
98 Antimicrobial Resistance of Acinetobacter baumannii in Veterinary Settings: A One Health Perspective from Punjab, Pakistan

Authors: Minhas Alam, Muhammad Hidayat Rasool, Mohsin Khurshid, Bilal Aslam

Abstract:

The genus Acinetobacter has emerged as a significant concern in hospital-acquired infections, particularly due to the versatility of Acinetobacter baumannii in causing nosocomial infections. The organism's remarkable metabolic adaptability allows it to thrive in diverse niches, including the environment, animals, and humans. However, the extent of antimicrobial resistance in Acinetobacter species from veterinary settings, especially in developing countries like Pakistan, remains unclear. This study aimed to isolate and characterize Acinetobacter spp. from veterinary settings in Punjab, Pakistan. A total of 2,230 specimens were collected, including 1,960 samples from veterinary settings (nasal and rectal swabs from dairy and beef cattle), 200 from the environment, and 70 from human clinical settings. Isolates were identified using routine microbiological procedures and confirmed by polymerase chain reaction (PCR). Antimicrobial susceptibility was determined by the disc diffusion method, and minimum inhibitory concentration (MIC) was measured by the broth microdilution method. Molecular techniques, such as PCR and DNA sequencing, were used to screen for antimicrobial resistance determinants, and genetic diversity was assessed using standard techniques. The overall prevalence of A. baumannii in cattle was 6.63% (65/980), with a higher prevalence in dairy cattle, 7.38% (54/731), than in beef cattle, 4.41% (11/249). Of the 65 cattle isolates, carbapenem resistance was found in 18 strains (27.7%). The prevalence of A. baumannii was higher in nasopharyngeal swabs, 87.7% (57/65), than in rectal swabs, 12.3% (8/65). The class D β-lactamase genes blaOXA-23 and blaOXA-51 were present in all carbapenem-resistant A. baumannii (CRAB) isolates from cattle. Among the carbapenem-resistant isolates, 94.4% (17/18) were positive for the class B β-lactamase gene blaIMP, whereas the blaNDM-1 gene was detected in only one isolate. Among the 70 clinical isolates of A. baumannii, 61 (87.1%) were CRAB, 58 (82.9%) were positive for the blaOXA-23-like gene, and all carried the blaOXA-51-like gene; hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 82.9% of clinical isolates. From the environmental settings, 18 A. baumannii isolates were recovered, of which 38.9% (7/18) showed carbapenem resistance. All environmental isolates harbored the class D β-lactamase gene blaOXA-51, while blaOXA-23 was detected in 38.9% (7/18); hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 38.9% of environmental isolates. MLST results showed ten different sequence types (STs) among the clinical isolates, with ST589 being the most common among carbapenem-resistant isolates, while ST2 was the most common among CRAB isolates from cattle. Immediate control measures are needed to prevent the transmission of CRAB isolates among animals, the environment, and humans. Further studies are warranted to understand the mechanisms of antibiotic resistance spread and to implement effective disease control programs.

Keywords: Acinetobacter baumannii, carbapenemases, drug resistance, MLST

Procedia PDF Downloads 70
97 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to loss of URT links' capacities in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links' capacities loss on the reliability and transport service quality of a URT network, passengers are divided into three categories in case of links' capacities loss. Passengers in category 1 are less affected by the loss of links' capacities; their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected by the loss of links' capacities; their travel is not reliable since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the capacities; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers' travel time, transfer times, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links' capacities loss on transport service quality is measured with passengers' relative average generalized travel cost with and without links' capacities loss. The proportion of passengers affected by links and the betweenness of links are used to determine the important links in the URT network. The stochastic user equilibrium distribution model based on the improved logit model is used to determine passengers' categories and calculate passengers' generalized travel cost in case of links' capacities loss; it is solved with the method of successive weighted averages algorithm. The reliability and transport service quality indicators of the URT network are calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the Wuhan Metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by links effectively identifies important links that have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and the passenger flow of important links is high; with an increase in the number of failed links and the proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases; and when the number of failed links and the proportion of capacity loss increase to a certain level, the decline of transport service quality weakens.
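
The method of successive (weighted) averages mentioned above can be sketched on a toy two-path network; the congestion function, logit scale, and all numbers here are illustrative assumptions, not the paper's model:

```python
# Minimal sketch of the method of successive averages (MSA) solving a
# logit-based stochastic user equilibrium on a two-path toy network.
import numpy as np

demand, theta = 1000.0, 0.05           # trips/hour, logit dispersion parameter
free_time = np.array([10.0, 12.0])     # free-flow travel times (min)
capacity = np.array([600.0, 800.0])    # path capacities (trips/hour)

def travel_time(flow):
    """BPR-style congested travel time (a common assumption, not the paper's)."""
    return free_time * (1.0 + 0.15 * (flow / capacity) ** 4)

flow = np.full(2, demand / 2)          # initial even split
for k in range(1, 200):
    cost = travel_time(flow)
    u = np.exp(-theta * cost)
    aux = demand * u / u.sum()         # auxiliary flow from logit choice
    flow += (aux - flow) / k           # step size 1/k; the weighted variant
                                       # of MSA tunes these step sizes
print("equilibrium path flows:", flow.round(1))
```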

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 128
96 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India

Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari

Abstract:

The northward convergence of the Indian plate has a dominating influence over the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creep and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of the GLAA are corroborated with historical landslide events and ultimately used for the generation of landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas and offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.

Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya

Procedia PDF Downloads 72
95 Engineers 'Write' Job Description: Development of English for Specific Purposes (ESP)-Based Instructional Materials for Engineering Students

Authors: Marjorie Miguel

Abstract:

Globalization offers better career opportunities and hence demands more competent professionals. With the transformation of world industry from competition to collaboration, coupled with rapid development in science and technology, engineers need to be not only technically proficient but also multilingual-skilled: two characteristics that a global engineer possesses. English often serves as the global language between people from different cultures, being the medium most used in international business. Ironically, most universities worldwide adopt an engineering curriculum built heavily around the language of mathematics, not realizing that the goal of an engineer is not only to create and design, but more importantly to promote his creations and designs to the general public through effective communication. This premise led to developments in the teaching of English subjects at the tertiary level, including the integration of technical knowledge related to the students' area of specialization into the English subjects they are taking, also known as English for Specific Purposes. This study focused on the development of English for Specific Purposes-based instructional materials for engineering students of Bulacan State University (BulSU). The materials were tailor-made: their contents and structure were designed to meet the specific needs of the students as well as the industry. The study was descriptive in nature, and a needs analysis determined the needs of the students and the industry. The major respondents included fifty engineering students and ten professional engineers from selected institutions. The needs analysis revealed the common writing difficulties of the students and the writing skills needed by engineers in the industry, and the topics in the instructional materials were established from its results. Simple statistical treatment, including frequency distribution, percentages, mean, standard deviation, and weighted mean, was used. The findings showed that the greatest number of respondents had an average proficiency rating in writing, and that the skills engineers most need to develop relate directly to the preparation and presentation of technical reports about their projects, as well as to the different communications they transmit to their colleagues and superiors. The researcher undertook three phases in developing the instructional materials: a design phase, a development phase, and an evaluation phase. Evaluations by college instructors confirmed the usefulness and significance of the instructional materials, making the study beneficial not only as a career enhancer for BulSU engineering students but also in making the university one of the educational institutions ready for the new millennium.
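
A small sketch of the simple statistical treatment named above, applied to hypothetical Likert-style survey responses (the 4-point scale and option weights are assumptions, not the study's instrument):

```python
# Frequency distribution, mean, standard deviation, and weighted mean for a
# batch of invented needs-analysis ratings.
from statistics import mean, stdev

responses = [4, 3, 3, 2, 4, 4, 3, 1, 2, 3]       # hypothetical 4-point ratings
weights   = {4: 1.0, 3: 0.75, 2: 0.5, 1: 0.25}   # assumed option weights

freq = {r: responses.count(r) for r in sorted(set(responses), reverse=True)}
weighted_mean = sum(weights[r] * n for r, n in freq.items()) / len(responses)
print(f"frequencies={freq}, mean={mean(responses):.2f}, "
      f"sd={stdev(responses):.2f}, weighted mean={weighted_mean:.2f}")
```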

Keywords: English for specific purposes, instructional materials, needs analysis, write (right) job description

Procedia PDF Downloads 239
94 Athletics and Academics: A Mixed Methods Enquiry on University/College Student Athletes' Experiences

Authors: Tshepang Tshube

Abstract:

The primary purpose of this study was to examine student-athletes' experiences, particularly an in-depth account of balancing school and sport. The secondary objective was to assess student-athletes' susceptibility to the effects of the "dumb-jock" stereotype threat and to determine the strength of athletic and academic identity as predicted by the extent to which the stereotype is perceived by student-athletes. Sub-objectives were to (a) examine the support structures available for student-athletes in their respective academic institutions, (b) establish the most effective ways to address student-athletes' learning needs, (c) establish the crucial entourage members who play a pivotal role in student-athletes' academic pursuits, and (d) identify unique and effective ways lecturers and coaches can contribute to student-athletes' learning experiences. To achieve these objectives, the study used a mixed methods approach. A total of 110 student-athletes from colleges and universities in Botswana completed an online survey that was followed by semi-structured interviews with eight student-athletes and four coaches. The online survey assessed student-athletes' demographic variables and measured athletic identity (AIMS), academic identity (modified from AIMS), and perceived stereotype threat. Student-athletes reported a slightly higher academic identity (M=5.9, SD=.85) compared to athletic identity (M=5.4, SD=1.0). For stereotype threat, student-athletes reported a moderate mean (M=3.6, SD=.82), just above the midpoint of the 7-point scale. A univariate ANOVA was conducted to determine whether there was any significant difference between university and college brackets in Botswana with regard to three variables: athletic identity, student identity, and stereotype threat. The only significant difference was in academic identity (post hoc Tukey, student identity: Bracket A < Bracket B, Bracket C), with Bracket A schools being the least athletically competitive; Brackets C and B are the most athletically competitive in Botswana. Follow-up interviews with student-athletes and coaches were conducted, lasting an average of 55 minutes. All recordings were then transcribed, the first step in the qualitative data analysis process. The researcher and an independent academic with experience in qualitative research independently listened to all recordings of the interviews and read the transcripts several times. The qualitative results indicate that, even though student-athletes reported a slightly higher student identity, there are parallels between the sports and academic structures on college campuses. The results also provide evidence of a lack of academic support for student-athletes. It is therefore crucial for student-athletes to have access to academic support services (e.g., tutoring, flexible study times, and reduced academic loads) to meet their academic needs. Coaches and lecturers play a fundamental role in supporting student-athletes: coaches' and professors' confidence in student-athletes' academic efficacy enhances student-athletes' academic confidence. The results are discussed within stereotype threat theory.
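
A hedged sketch of the one-way ANOVA with Tukey post hoc comparison reported above, run on fabricated academic-identity scores for three school brackets:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)                      # fabricated example data
scores = np.concatenate([rng.normal(5.4, 0.8, 40),  # Bracket A
                         rng.normal(6.0, 0.8, 35),  # Bracket B
                         rng.normal(6.1, 0.8, 35)]) # Bracket C
groups = ["A"] * 40 + ["B"] * 35 + ["C"] * 35

f, p = stats.f_oneway(scores[:40], scores[40:75], scores[75:])
print(f"F={f:.2f}, p={p:.4f}")
if p < 0.05:                                        # follow up only if significant
    print(pairwise_tukeyhsd(scores, groups))        # pairwise bracket comparisons
```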

Keywords: athletic identity, collegiate sport, stereotype threat, student athletes

Procedia PDF Downloads 462
93 Examining Smallholder Farmers’ Perceptions of Climate Change and Barriers to Strategic Adaptation in Todee District, Liberia

Authors: Joe Dorbor Wuokolo

Abstract:

Thousands of smallholder farmers in Todee District, Montserrado County, are currently vulnerable to the negative impact of climate change. The district, which is the agricultural hot spot for the county, faces unfavorable changes in daily temperature due to climate change. Farmers in the district have observed a dramatic change in the ratio of rainfall to sunshine, which has had a chilling effect on their crop yields. However, there is a lack of documentation regarding how farmers perceive and respond to these changes and challenges. A study was conducted in the region to examine smallholder farmers' perceptions of the negative impact of climate change, the adaptation strategies practiced, and the barriers that hinder the advancement of adaptation strategies. A purposive sample of 41 respondents from five towns was selected, including five town chiefs, five youth leaders, five women leaders, and sixteen community members. Women and youth leaders were specifically chosen to provide gender balance and enhance the quality of the investigation. Additionally, to validate the barriers farmers face in adapting to climate change, this study interviewed eight experts from local and international organizations and government ministries and agencies involved in climate change and agricultural programs on what they perceived as the major barriers, at both the local and national levels, that impede farmers' adaptation to climate change impacts. SPSS was used to code the data, and descriptive statistics were used to analyze them. The weighted average index (WAI) was used to rank adaptation strategies and the perceived importance of adaptation practices among farmers, on a scale from 0 to 3, where 0 indicates the least important and 3 the most effective technique. In addition, the Problem Confrontation Index (PCI) was used to rank the barriers that prevented farmers from implementing adaptation measures. According to the findings, approximately 60% of all respondents considered the use of irrigation systems to be the most effective adaptation strategy, with drought-resistant varieties making up 30% of the total. Additionally, 80% of respondents placed a high value on drought-resistant varieties, while 63% placed a high value on irrigation practices. Furthermore, 78% of farmers indicated that the unpredictability of the weather is the most significant barrier to their adaptation strategies, followed by the high cost of farm inputs and lack of access to financing facilities. Eighty percent of respondents believe that the long-term changes in precipitation (rainfall) and temperature (hotness) are accelerating. This suggests that decision-makers should adopt policies and increase the capacity of smallholder farmers to adapt to the negative impact of climate change in order to ensure sustainable food production.
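
A minimal sketch of the weighted average index (WAI) used to rank adaptation strategies on the 0-3 scale described above; the response tallies are invented:

```python
def weighted_average_index(counts):
    """counts[score] = number of respondents giving that score (0..3)."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

strategies = {   # hypothetical tallies for 41 respondents
    "irrigation":           {3: 25, 2: 10, 1: 4, 0: 2},
    "drought-resistant":    {3: 12, 2: 18, 1: 8, 0: 3},
    "crop diversification": {3: 8, 2: 14, 1: 12, 0: 7},
}
# Rank strategies from highest to lowest WAI
for name, counts in sorted(strategies.items(),
                           key=lambda kv: -weighted_average_index(kv[1])):
    print(f"{name:22s} WAI={weighted_average_index(counts):.2f}")
```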

Keywords: adaptation strategies, climate change, farmers’ perception, smallholder farmers

Procedia PDF Downloads 82
92 Magnetofluidics for Mass Transfer and Mixing Enhancement in a Micro Scale Device

Authors: Majid Hejazian, Nam-Trung Nguyen

Abstract:

Over the past few years, microfluidic devices have attracted significant attention from industry and academia due to advantages such as small sample volume, low cost, and high efficiency. Microfluidic devices have applications in chemical, biological, and industrial analysis and can facilitate assays of biomaterials and chemical reactions, separation, and sensing. Micromixers are one of the important microfluidic concepts. Micromixers can work as stand-alone devices or be integrated in a more complex microfluidic system such as a lab on a chip (LOC). Micromixers are categorized as passive or active. Passive micromixers rely only on the arrangement of the phases to be mixed, contain no moving parts, and require no energy. Active micromixers require external fields such as pressure, temperature, electric, and acoustic fields. Rapid and efficient mixing is important for many applications such as biological, chemical, and biochemical analysis. Achieving fast and homogeneous mixing of multiple samples in microfluidic devices has been studied and discussed in the literature recently. Improvements in mixing rely on effective mass transport at the microscale, which is currently limited to molecular diffusion due to the predominantly laminar flow at this size scale. Using a magnetic field to enhance mass transport is an effective solution for mixing enhancement in microfluidics. The use of a non-uniform magnetic field to improve mass transfer performance in a microfluidic device is demonstrated in this work. The phenomenon of mixing ferrofluid and DI-water streams has been reported before, but mass transfer enhancement for other, non-magnetic species through a magnetic field has not been studied and evaluated extensively. In the present work, permanent magnets were used in a simple microfluidic device to create a non-uniform magnetic field. Two streams are introduced into the microchannel: one contains fluorescent dye mixed with diluted ferrofluid to induce enhanced mass transport of the dye, and the other is a non-magnetic DI-water stream. Mass transport enhancement of the fluorescent dye is evaluated using fluorescence measurement techniques, and the concentration field is measured for different flow rates. Due to the effect of the magnetic field, a body force is exerted on the paramagnetic stream and expands the ferrofluid stream into the non-magnetic DI-water flow. The experimental results demonstrate that, without a magnetic field, both the magnetic nanoparticles of the ferrofluid and the fluorescent dye rely solely on molecular diffusion to spread. The non-uniform magnetic field created by the permanent magnets around the microchannel, together with the diluted ferrofluid, can improve the mass transport of non-magnetic solutes in a microfluidic device. The susceptibility mismatch between the fluids results in a magnetoconvective secondary flow towards the magnets and subsequently enhances the mass transport of the non-magnetic fluorescent dye. A significant enhancement in mass transport of the fluorescent dye was observed. The platform presented here could be used as a microfluidics-based micromixer for chemical and biological applications.

Keywords: ferrofluid, mass transfer, micromixer, microfluidics, magnetic

Procedia PDF Downloads 225
91 Liquefaction Phenomenon in the Kathmandu Valley during the 2015 Earthquake of Nepal

Authors: Kalpana Adhikari, Mandip Subedi, Keshab Sharma, Indra P. Acharya

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 struck the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of the Kathmandu Valley. The peak ground acceleration observed during the earthquake was 0.18g. This motion induced several geotechnical effects such as landslides, foundation failures, liquefaction, lateral spreading and settlement, and local amplification. An aftershock of moment magnitude (Mw) 7.3 struck northeast of Kathmandu on May 12, 17 days after the main shock, causing additional damage. Kathmandu, the largest city in Nepal, has a population of over four million. As the Kathmandu Valley deposits are composed mainly of sand, silt, and clay layers with a shallow groundwater table, liquefaction is highly anticipated; extensive liquefaction was also observed in the Kathmandu Valley during the 1934 Nepal-Bihar earthquake. Field investigations were carried out in the Kathmandu Valley immediately after the Mw 7.8, April 25 main shock and the Mw 7.3, May 12 aftershock, and geotechnical investigations of both liquefied and non-liquefied sites were conducted. This paper presents observations of liquefaction and liquefaction-induced damage, and a liquefaction potential assessment based on Standard Penetration Tests (SPT) for liquefied and non-liquefied sites. An SPT-based semi-empirical approach has been used for evaluating the liquefaction potential of the soil, and the Liquefaction Potential Index (LPI) has been used to determine the liquefaction probability. Recorded ground motions from the event are presented. The geological setting of the Kathmandu Valley and the local site effects on the occurrence of liquefaction are described briefly, along with the observed liquefaction case studies. Typically, these are sand boils formed by freshly ejected sand forced out of over-pressurized sub-strata. At most sites, sand was ejected onto agricultural fields, forming deposits that varied from millimeters to a few centimeters thick. Liquefaction-induced damage to structures in these areas was not significant, except that buildings in some places tilted slightly. Boiled soils at liquefied sites were collected, and the particle size distributions of the ejected soils were analyzed. SPT blow counts and soil profiles at ten liquefied and non-liquefied sites were obtained. The factors of safety against liquefaction with depth and the liquefaction potential indices of the ten sites were estimated and compared with the liquefaction observed after the 2015 Gorkha earthquake. The liquefaction potential indices obtained from the analysis were found to be consistent with the field observations. The field observations, along with the results of the liquefaction assessment, were compared with the existing liquefaction hazard map. It was found that the existing hazard maps are unrepresentative and underestimate the liquefaction susceptibility in the Kathmandu Valley. The lessons learned from the liquefaction during this earthquake are also summarized, and some recommendations are made for seismic liquefaction mitigation in the Kathmandu Valley.
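
The LPI calculation can be sketched as below, following the common Iwasaki-type formulation (an assumption, since the paper does not spell out its exact variant); the factor-of-safety profile and threshold note are illustrative:

```python
# Liquefaction potential index (LPI) from factors of safety with depth:
# LPI = integral over 0-20 m of F(z) * w(z), F = 1 - FS when FS < 1 (else 0),
# with depth weighting w(z) = 10 - 0.5 z.
def lpi(profile, dz=1.0):
    """profile: list of (depth_m, factor_of_safety) at dz intervals."""
    total = 0.0
    for z, fs in profile:
        if z > 20.0:
            break
        severity = max(0.0, 1.0 - fs)   # contributes only where FS < 1
        weight = 10.0 - 0.5 * z         # shallow layers weigh more
        total += severity * weight * dz
    return total

# Hypothetical SPT-based FS profile for one borehole
fs_values = [1.3, 1.1, 0.9, 0.8, 0.7, 0.8, 0.9, 1.0, 1.2, 1.4,
             1.5, 1.6, 1.8, 2.0, 2.2]
profile = list(zip(range(1, 16), fs_values))
print(f"LPI = {lpi(profile):.1f}  (commonly, >15 indicates high severity)")
```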

Keywords: factor of safety, geotechnical investigation, liquefaction, Nepal earthquake

Procedia PDF Downloads 323
90 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality is especially affected, by deficient acoustical inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday life situations are multimodal: a complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes lead, in the case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, in which, in the case of successful MI, incongruent speech-like phonemes between the visual and auditory modalities lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar imaging resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner, and a seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, Controls Mdn: 44.2%, U=160.5, p=0.022, r=-0.34). When ADHD patients integrated phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, Controls Mdn: 582 ms, U=41.0, p<.000, r=-0.56). In functional connectivity, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
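
A sketch of the reported group comparison: a Mann-Whitney U test with an approximate r effect size (r = Z/sqrt(N)); the McGurk susceptibility values are fabricated:

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

adhd     = np.array([2.1, 5.8, 0.0, 9.4, 4.0, 12.5, 3.3, 7.1])  # % McGurk
controls = np.array([40.2, 51.0, 33.3, 44.2, 61.8, 29.6, 48.0])

u, p = mannwhitneyu(adhd, controls, alternative="two-sided")
n = len(adhd) + len(controls)
z = norm.ppf(p / 2)          # approximate Z recovered from the two-sided p
r = z / np.sqrt(n)           # effect size r (negative: first group lower)
print(f"U={u:.1f}, p={p:.3f}, r={r:.2f}")
```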

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 127
89 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) a pattern analysis approach as a starting point for more in-depth analyses (geospatial regression) in the future. For the analysis of this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) are included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at the NUTS3 level) and identification of the median center; standard distance, weighted standard distance, and standard deviational ellipses will also be used. Identifying that clustering exists in census household datasets does not by itself provide a detailed picture of the nature and pattern of the clustering, but it is helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units where a numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran's I will give an indication of the homogeneity and diversity of household data at the municipal level.
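
A minimal sketch of global Moran's I for a municipal attribute such as the share of single-person households, using a toy row-standardized contiguity matrix (all values invented):

```python
import numpy as np

x = np.array([0.32, 0.41, 0.28, 0.36, 0.45])   # hypothetical shares, 5 units
W = np.array([[0, 1, 1, 0, 0],                 # assumed binary contiguity
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)              # row standardization

z = x - x.mean()
n = len(x)
# I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, with S0 = total weight
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {moran_I:.3f}  (>0: clustering, <0: dispersion)")
```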

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 96
88 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors

Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev

Abstract:

Introduction. The problem of viral hepatitis B (HBV) occupies a vital place in the global health system. The existing risk of HBV transmission through blood transfusions is associated with transfusion of blood taken from infected individuals during the "serological window" period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other markers of hepatitis B, the presence of anti-HBcore suggests that a person may be actively infected or has had hepatitis B in the past and has immunity. Aim. To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. Samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (ABBOTT). A special questionnaire was developed for the blood donors' socio-demographic characteristics. Statistical analysis was conducted with the R software (version 4.1.1, USA, 2021). Results. 5,709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively, and the average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, in an analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, a one-year increase in age (adjOR=1.06, 95%CI: 1.05-1.07) was associated with a 6% increase in the odds of an anti-HBcore-positive result. Those of Russian ethnicity (adjOR=0.65, 95%CI: 0.46-0.93) and representatives of other nationality groups (adjOR=0.56, 95%CI: 0.37-0.85) had lower odds of being anti-HBcore positive compared to Kazakhs when controlling for the other covariates. Among singles, the odds of a positive anti-HBcore result were 29% lower (adjOR=0.71, 95%CI: 0.57-0.89) compared to married participants, adjusting for the other variables. Conclusions. Kazakhstan is one of the countries with medium endemicity of HBV prevalence (2%-7%). The results of the study demonstrated the possibility of forming a profile of risk factors (age, nationality, marital status). Taking these data into account, it is recommended to pay closer attention to donor questionnaires, by adding leading questions, and to improve preventive measures against HBV. Funding. This research was supported by a grant from Abbott Laboratories.
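
A hedged sketch of a weighted multivariable logistic regression yielding adjusted odds ratios, as in the analysis above; the original was done in R, so this Python translation, the file name, and the variable names are all assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("donors.csv")   # hypothetical file: anti_hbcore 0/1, age, ...
fit = smf.glm("anti_hbcore ~ age + C(ethnicity) + C(marital_status) + C(gender)",
              data=df, family=sm.families.Binomial(),
              freq_weights=df["weight"]).fit()

# Exponentiate coefficients to get adjusted ORs with 95% CIs
ci = fit.conf_int()
ors = pd.DataFrame({"adjOR": np.exp(fit.params),
                    "CI 2.5%": np.exp(ci[0]),
                    "CI 97.5%": np.exp(ci[1])})
print(ors.round(2))
```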

Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis

Procedia PDF Downloads 107
87 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to assure that this information is accurate. The objective of this study was to estimate CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. Output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data regarding 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals; of these, 5,000 were head, 5,000 were chest, and 5,000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following the QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergences between the measured and displayed CTDIvol values were 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively. Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices will be optimal and lead to lower CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year, as per AERB requirements.
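
The displayed-versus-measured comparison reduces to a percentage divergence per protocol, sketched below with invented CTDIvol values:

```python
# Percentage divergence of the console (displayed) CTDIvol from the
# phantom-measured value. All numbers are illustrative.
def divergence_pct(displayed, measured):
    """Relative difference of the console value from the measured value."""
    return 100.0 * (displayed - measured) / measured

exams = [  # (protocol, displayed CTDIvol mGy, measured CTDIvol mGy) - invented
    ("head", 58.1, 55.0), ("chest", 13.4, 12.3), ("abdomen", 15.2, 16.1),
]
for protocol, shown, meas in exams:
    print(f"{protocol:8s} divergence = {divergence_pct(shown, meas):+.1f}%")
```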

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 256
86 Relationship between Gully Development and Characteristics of Drainage Area in Semi-Arid Region, NW Iran

Authors: Ali Reza Vaezi, Ouldouz Bakhshi Rad

Abstract:

Gully erosion is a widespread and often dramatic form of soil erosion caused by water during and immediately after heavy rainfall. It occurs when flowing surface water is channelled across unprotected land and washes away the soil along the drainage lines. The formation of gullies is influenced by various factors, including climate, drainage surface area, slope gradient, vegetation cover, land use, and soil properties. It is a very important problem in semi-arid regions, where soils are low in organic matter and weakly aggregated, and where intensive agriculture and tillage along the slope can accelerate soil erosion by water. There is little information on the development of gully erosion in agricultural rainfed areas. Therefore, this study was carried out to investigate the relationship between gully erosion and the morphometric characteristics of the drainage area, and the effects of soil properties and soil management factors (land use and tillage method) on gully development. A field study was done in a 900 km2 agricultural area in Hashtroud township, located in the south of East Azarbijan province, NW Iran. Toward this end, 222 gullies formed in rainfed lands were identified in the area. Properties of the gullies, consisting of length, width, depth, height difference, cross-sectional area, and volume, were determined. Drainage areas for individual gullies or groups of gullies were determined, and their boundaries were drawn. Additionally, the surface area of each drainage, land use, tillage direction, and the soil properties that may affect gully formation were determined. The soil erodibility factor (K) defined in the Universal Soil Loss Equation (USLE) was estimated from five soil properties (silt and very fine sand, coarse sand, organic matter, soil structure code, and soil permeability). Gully development in each drainage area was quantified using its volume and soil loss. The dependency of gully development on drainage area characteristics (surface area, land use, tillage direction, and soil properties) was determined using correlation matrix analysis. Based on the results, gully length was the most important morphometric characteristic indicating the development of gully erosion in these lands. Gully development in the area was related to the slope gradient (r=-0.26), surface area (r=0.71), area of rainfed lands (r=0.23), and area of rainfed land tilled along the slope (r=0.24). Nevertheless, its correlation with the area of pasture and the soil erodibility factor (K) was not significant. Among the characteristics of the drainage area, surface area is the major factor controlling gully volume in this agricultural land. No significant correlation was found between gully erosion and the soil erodibility factor (K) estimated by the USLE; it seems the estimated soil erodibility cannot describe the susceptibility of the studied soils to the gully erosion process. In these soils, aggregate stability and soil permeability are the two soil physical properties that affect the actual soil erodibility, and consequently these soil properties can control gully erosion in the rainfed lands.
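
The correlation screening described above can be sketched with a simple Pearson correlation matrix; the data frame below is entirely invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({  # hypothetical per-gully observations
    "gully_volume":   [120, 340, 80, 510, 220, 95, 400],   # m^3
    "surface_area":   [1.2, 3.5, 0.8, 5.1, 2.0, 1.0, 4.2], # ha
    "slope_gradient": [12, 8, 15, 6, 10, 14, 7],           # %
    "rainfed_area":   [0.6, 2.1, 0.3, 3.0, 1.1, 0.5, 2.6], # ha
})
# Correlation of gully volume against each drainage-area characteristic
print(df.corr(method="pearson")["gully_volume"].round(2))
```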

Keywords: agricultural area, gully properties, soil structure, USLE

Procedia PDF Downloads 77
85 Antimicrobial Efficacy of Some Antibiotics Combinations Tested against Some Molecular Characterized Multiresistant Staphylococcus Clinical Isolates, in Egypt

Authors: Nourhan Hussein Fanaki, Hoda Mohamed Gamal El-Din Omar, Nihal Kadry Moussa, Eva Adel Edward Farid

Abstract:

The resistance of staphylococci to various antibiotics has become a major concern for health care professionals. The efficacy of the combinations of selected glycopeptides (vancomycin and teicoplanin) with gentamicin or rifampicin, as well as that of the gentamicin/rifampicin combination, was studied against selected pathogenic staphylococci isolated in Egypt. The molecular distribution of genes conferring resistance to these four antibiotics was determined among the tested clinical isolates. Antibiotic combinations were studied using the checkerboard technique and the time-kill assay (in both the stationary and log phases). Induction of resistance to glycopeptides in staphylococci was attempted in the absence and presence of diclofenac sodium as an inducer. Transmission electron microscopy was used to study the effect of glycopeptides on the ultrastructure of the staphylococcal cell wall. Attempts were made to cure gentamicin resistance plasmids and to study the transfer of these plasmids by conjugation, and trials for the transformation of the successfully isolated gentamicin resistance plasmid into competent cells were carried out. Detection of genes conferring resistance to the tested antibiotics was performed using the polymerase chain reaction. The studied antibiotic combinations proved their efficacy, especially when tested during the log phase. Induction of resistance to glycopeptides in staphylococci was more promising in the presence of diclofenac sodium than in its absence. Transmission electron microscopy revealed thickening of the bacterial cell wall in staphylococcus clinical isolates due to the presence of the tested glycopeptides. Curing of gentamicin resistance plasmids was only successful in 2 out of 9 tested isolates, with a curing rate of 1 percent for each. Both isolates, when used as donors in conjugation experiments, yielded promising conjugation frequencies ranging between 5.4 × 10⁻² and 7.48 × 10⁻² colony-forming units/donor cell. Plasmid isolation was only successful in one of the two tested isolates; however, a low transformation efficiency (59.7 transformants/microgram plasmid DNA) of such plasmids was obtained. Negative regulators of autolysis, such as arlR, lytR, and lrgB, as well as cell-wall-associated genes, such as pbp4 and/or pbp2, were detected in staphylococcus isolates with reduced susceptibility to the tested glycopeptides. Concerning rifampicin resistance genes, rpoBstaph was detected in 75 percent of the tested staphylococcus isolates. It could be concluded that the in vitro studies emphasized the usefulness of the combination of vancomycin or teicoplanin with gentamicin or rifampicin, as well as that of gentamicin with rifampicin, against staphylococci showing varying resistance patterns; however, further in vivo studies are required to ensure the safety and efficacy of such combinations. Diclofenac sodium can act as an inducer of resistance to glycopeptides in staphylococci, and cell-wall thickness is a major contributor to such resistance. Gentamicin resistance in these strains could be chromosomally or plasmid mediated, and multiple mutations in the rpoB gene could mediate staphylococcal resistance to rifampicin.
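
Checkerboard results are commonly summarized with the fractional inhibitory concentration index (FICI); the paper does not state its exact criteria, so the standard thresholds below are assumptions, and the MICs are invented:

```python
# FICI = MIC_A(combo)/MIC_A(alone) + MIC_B(combo)/MIC_B(alone)
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index):
    if index <= 0.5:
        return "synergy"
    if index <= 4.0:
        return "no interaction (additivity/indifference)"
    return "antagonism"

# Hypothetical MICs (mg/L): vancomycin alone 2, gentamicin alone 8;
# in combination 0.5 and 2, respectively.
index = fici(2.0, 8.0, 0.5, 2.0)
print(f"FICI = {index:.2f} -> {interpret(index)}")
```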

Keywords: glycopeptides, combinations, induction, diclofenac, transmission electron microscopy, polymerase chain reaction

Procedia PDF Downloads 292
84 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength: the slope is considered stable when the forces resisting movement are greater than those driving it, with a factor of safety (the ratio of the resisting to the driving forces) greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards; assessment of the level of slope stability hazard; development of a sophisticated and practical hazard analysis method; linkage of the failure type of specific landslide conditions to the appropriate solution; and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
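
A minimal sketch of the inverse distance weighted (IDW) interpolation mentioned above for mapping slope stability properties; the sample points, factors of safety, and power parameter are illustrative:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Interpolate a value at xy_query from scattered (x, y) observations."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < 1e-12):              # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power               # closer points carry more weight
    return float(np.sum(w * values) / np.sum(w))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fs = np.array([1.4, 1.1, 0.9, 1.2])    # hypothetical factors of safety
print(f"FS at (0.4, 0.6) ~= {idw(pts, fs, np.array([0.4, 0.6])):.2f}")
```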

Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard

Procedia PDF Downloads 99
83 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis

Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov

Abstract:

Nowadays, most oil reserves in Russia and all over the world are hard to recover. That is the reason oil companies are searching for new sources for hydrocarbon production. One such source might be high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes that are difficult to predict. Only one in five wells is drilled through an unconventional reservoir; in the others, kerogen has low thermal maturity and the rocks are poorly petroliferous. Therefore, there is a demand for tertiary methods for in-situ cracking of kerogen and production of oil. Laboratory experiments on hydrous pyrolysis of Bazhenov formation rock were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica concentration from 15 to 90 wt.%, clays 5-50 wt.%, carbonates 0-30 wt.%, kerogen 1-25 wt.%) and thermal maturities (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g weight were placed in the retort, covered with water, and heated to temperatures varying from 250 to 400°C, with experiment durations from several hours to one week. After the experiments, the retort was cooled to room temperature; generated hydrocarbons were extracted with hexane, then separated from the solvent and weighed. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography. Characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks. The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of the initial organic matter content in the rocks. At higher temperatures and longer heating times, secondary cracking of generated hydrocarbons occurs, the mass of produced oil decreases, and the composition contains more hydrocarbons that need to be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals work as catalysts. Selection of heating conditions allows the production of synthesized oil with a specified composition. Kerogen investigations after heating showed that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil. This yield is the result of gaseous hydrocarbon formation due to secondary cracking and the aromatization and coaling of kerogen. Future investigations will aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production. The evaluated trends could be applied to in-situ oil generation from shale rocks by thermal action.

Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil

Procedia PDF Downloads 114
82 Management of Mycotoxin Production and Fungicide Resistance by Targeting Stress Response System in Fungal Pathogens

Authors: Jong H. Kim, Kathleen L. Chan, Luisa W. Cheng

Abstract:

Control of fungal pathogens, such as foodborne mycotoxin producers, is problematic, as effective antimycotic agents are often very limited. Mycotoxin contamination significantly interferes with the safe production of foods and crops worldwide. Moreover, the expansion of fungal resistance to commercial drugs or fungicides is a global human health concern. Therefore, there is a persistent need to enhance the efficacy of commercial antimycotic agents or to develop new intervention strategies. Disruption of the cellular antioxidant system should be an effective method for pathogen control. Such disruption can be achieved with safe, redox-active compounds. Natural phenolic derivatives are potent redox cyclers that inhibit fungal growth through destabilization of the cellular antioxidant system. The goal of this study is to identify novel, redox-active compounds that disrupt the fungal antioxidant system. The identified compounds could also function as sensitizing agents to conventional antimycotics (i.e., chemosensitization) to improve antifungal efficacy. Various benzo derivatives were tested against fungal pathogens. Gene deletion mutants of the yeast Saccharomyces cerevisiae were used as model systems for identifying molecular targets of the benzo analogs. The efficacy of identified compounds as potent antifungal agents or as chemosensitizing agents to commercial drugs or fungicides was examined with methods outlined by the Clinical and Laboratory Standards Institute or the European Committee on Antimicrobial Susceptibility Testing. Selected benzo derivatives possessed potent antifungal or antimycotoxigenic activity. Molecular analyses using S. cerevisiae mutants indicated that the antifungal activity of benzo derivatives operated through disruption of the cellular antioxidant or cell wall integrity systems. Certain benzo analogs screened overcame the tolerance of Aspergillus signaling mutants, namely mitogen-activated protein kinase mutants, to the fungicide fludioxonil. Synergistic antifungal chemosensitization greatly lowered the minimum inhibitory or fungicidal concentrations of test compounds, including inhibitors of mitochondrial respiration. Of note, salicylaldehyde is a potent antimycotic volatile that has some practical application as a fumigant. Altogether, benzo derivatives targeting the cellular antioxidant system of fungi (along with the cell wall integrity system) effectively suppress fungal growth. Candidate compounds possess the antifungal, antimycotoxigenic or chemosensitizing capacity to augment the efficacy of commercial antifungals. Therefore, chemogenetic approaches can lead to the development of novel antifungal intervention strategies, which enhance the efficacy of established microbe intervention practices and overcome drug/fungicide resistance. Chemosensitization further reduces costs and alleviates negative side effects associated with current antifungal treatments.

Keywords: antifungals, antioxidant system, benzo derivatives, chemosensitization

Procedia PDF Downloads 262
81 The Impact of Physical Exercise on Gestational Diabetes and Maternal Weight Management: A Meta-Analysis

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Physiological changes during pregnancy, such as alterations in the circulatory, respiratory, and musculoskeletal systems, can negatively impact daily physical activity. This reduced activity is often associated with an increased risk of adverse maternal health outcomes, particularly gestational diabetes mellitus (GDM) and excessive weight gain. This meta-analysis aims to evaluate the effectiveness of structured physical exercise interventions during pregnancy in reducing the risk of GDM and managing maternal weight gain. A comprehensive search was conducted across six major databases: PubMed, Cochrane Library, EMBASE, Web of Science, ScienceDirect, and ClinicalTrials.gov, covering the period from database inception until 2023. Randomized controlled trials (RCTs) that explored the effects of physical exercise programs on pregnant women with low physical activity levels were included. Records were managed using EndNote, and the meta-analysis was performed in RevMan (Review Manager). RCTs involving healthy pregnant women with low levels of physical activity or sedentary lifestyles were selected, provided they incorporated structured exercise programs during pregnancy and reported outcomes related to GDM and maternal weight gain. From an initial pool of 5,112 articles, 65 RCTs (involving 11,400 pregnant women) met the inclusion criteria. Data extraction was performed, followed by a quality assessment of the selected studies using the Cochrane Risk of Bias tool. The meta-analysis was conducted using RevMan software, where pooled relative risks (RR) and weighted mean differences (WMD) were calculated using a random-effects model to address heterogeneity across studies. Sensitivity analyses, subgroup analyses (based on factors such as exercise intensity, duration, and pregnancy stage), and publication bias assessments were also conducted. Structured physical exercise during pregnancy led to a significant reduction in the risk of developing GDM (RR = 0.68; P < 0.001), particularly when the exercise program was performed throughout the pregnancy (RR = 0.62; P = 0.035). In addition, maternal weight gain was significantly reduced (WMD = −1.18 kg; 95% CI −1.54 to −0.85; P < 0.001). No significant adverse effects were reported for either the mother or the neonate, confirming that exercise interventions are safe for both. This meta-analysis highlights the positive impact of regular moderate physical activity during pregnancy in reducing the risk of GDM and managing maternal weight gain. These findings suggest that physical exercise should be encouraged as a routine part of prenatal care. However, more research is required to refine exercise recommendations and determine the most effective interventions based on individual risk factors and pregnancy stages.
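
To illustrate the pooling step, here is a minimal DerSimonian-Laird random-effects sketch on the log-RR scale, with made-up per-trial inputs; the abstract's 65 RCTs are not reproduced here, and RevMan performs the equivalent computation internally.

```python
import numpy as np

# Hypothetical (log RR, variance of log RR) pairs from a few trials
log_rr = np.array([np.log(0.70), np.log(0.55), np.log(0.80), np.log(0.65)])
var = np.array([0.04, 0.09, 0.02, 0.06])

# DerSimonian-Laird estimate of between-study variance tau^2
w_fixed = 1.0 / var
mu_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
q = np.sum(w_fixed * (log_rr - mu_fixed) ** 2)          # Cochran's Q
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)

# Random-effects pooled RR and 95% CI
w = 1.0 / (var + tau2)
mu = np.sum(w * log_rr) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
lo, hi = mu - 1.96 * se, mu + 1.96 * se
print(f"pooled RR = {np.exp(mu):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```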

Keywords: gestational diabetes, maternal weight management, meta-analysis, randomized controlled trials

Procedia PDF Downloads 10
80 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy

Authors: Lucia Liu, Matthew Koziol

Abstract:

Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, little standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4: bad, fair, good, and excellent). Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS). When comparing obsessional stress induced across the two arms of internal and external provocation methods, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (37.5%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to better OCD status, though the latter difference was not statistically significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support the increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving greater treatment response to TMS.
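
The arm comparison described above can be reproduced in outline with a repeated-measures ANOVA. The sketch below uses statsmodels' AnovaRM on simulated session-level stress ratings (the per-session data are not published in the abstract), averaging the repeated sessions within each patient-by-method cell.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for patient in ["P1", "P2"]:
    for method, mean in [("internal", 6.0), ("external", 4.0)]:
        # 18 sessions per arm; ratings simulated around the reported means
        for _ in range(18):
            rows.append({"patient": patient, "method": method,
                         "stress": float(np.clip(rng.normal(mean, 1.2), 1, 10))})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA; repeated sessions per cell are averaged first
res = AnovaRM(df, depvar="stress", subject="patient",
              within=["method"], aggregate_func="mean").fit()
print(res)
```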

Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation

Procedia PDF Downloads 54
79 Evolution of Antimicrobial Resistance in Shigella since the Turn of 21st Century, India

Authors: Neelam Taneja, Abhishek Mewara, Ajay Kumar

Abstract:

Multidrug-resistant shigellae have emerged as a therapeutic challenge in India. At our 2000-bed tertiary care referral centre in Chandigarh, North India, which caters to a large population across 7 neighboring states, antibiotic resistance in Shigella is constantly monitored. Shigellae are isolated from 3 to 5% of all stool samples. In 1990, nalidixic acid was the drug of choice, as 82% and 63% of shigellae were resistant to ampicillin and cotrimoxazole, respectively. Nalidixic acid resistance emerged in 1992 and rapidly increased from 6% during 1994-98 to 86% by the turn of the 21st century. In the 1990s, the WHO recommended ciprofloxacin as the drug of choice for empiric treatment of shigellosis in view of the existing high-level resistance to agents like chloramphenicol, ampicillin, cotrimoxazole and nalidixic acid. The first resistance to ciprofloxacin in S. flexneri at our centre appeared in 2000 and rapidly rose to 46% in 2007 (MIC > 4 mg/L). In between, we had an outbreak of ciprofloxacin-resistant S. dysenteriae serotype 1 in 2003. Therapeutic failures with ciprofloxacin occurred with both ciprofloxacin-resistant S. dysenteriae and ciprofloxacin-resistant S. flexneri. The severity of illness was greater with ciprofloxacin-resistant strains. Until 2000, ciprofloxacin resistance in S. flexneri was sporadic and uncommon elsewhere in the world, though resistance to co-trimoxazole and ampicillin was common and in some areas resistance to nalidixic acid had also emerged. Fluoroquinolones, due to extensive use and misuse for many other illnesses in our region, are thus no longer the preferred group of drugs for managing shigellosis in India. The WHO presently recommends ceftriaxone and azithromycin as alternative drugs for fluoroquinolone-resistant shigellae; however, overreliance on this group of drugs may also soon become questionable considering the emerging cephalosporin-resistant shigellae. We found 15.1% of S. flexneri isolates collected over a period of 9 years (2000-2009) resistant to at least one of the third-generation cephalosporins (ceftriaxone/cefotaxime). The first isolate showing ceftriaxone resistance was obtained in 2001, and we have observed an increase in the number of isolates resistant to third-generation cephalosporins in S. flexneri from 2005 onwards. This situation has now become a therapeutic challenge in our region. The MIC values for Shigella isolates revealed a worrisome rise for ceftriaxone (MIC90: 12 mg/L) and cefepime (MIC90: 8 mg/L). MIC values for S. dysenteriae remained below 1 mg/L for ceftriaxone; however, for cefepime, the MIC90 has risen to 4 mg/L. Infections caused by ceftriaxone-resistant S. flexneri isolates were successfully treated with azithromycin at our center. The most worrisome recent development has been the emergence of decreased susceptibility to azithromycin (DSA), which surfaced in 2001 and increased from 4.3% until 2011 to 34% thereafter. We suspect plasmid-mediated resistance, as we detected qnrS1-positive Shigella for the first time from the Indian subcontinent in 2 strains from 2010, indicating a relatively recent appearance of this PMQR determinant among Shigella in India. This calls for continuous and strong surveillance of antibiotic resistance across the country. The prevention of shigellosis by developing cost-effective vaccines is desirable, as it would substantially reduce the morbidity associated with diarrhoea in the country.

Keywords: Shigella, antimicrobial, resistance, India

Procedia PDF Downloads 229
78 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of the BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking; a structural sketch follows this abstract. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability to larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for the specifics of seismic data. The EQMamba method holds potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
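
A compact PyTorch sketch of the pipeline described above: convolutional encoder, BiLSTM, and three sigmoid decoder heads trained with a weighted sum of binary cross-entropy losses. This is an illustrative skeleton, not the authors' code; the Mamba block (which requires the separate mamba-ssm package) is stood in for by an identity placeholder, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class EQSketch(nn.Module):
    """Skeleton of an EQMamba-style detector/picker (illustrative only)."""
    def __init__(self, channels=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(           # conv + max-pool feature extractor
            nn.Conv1d(channels, hidden, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True,
                              bidirectional=True)
        self.mamba = nn.Identity()              # placeholder for a Mamba block
        # one decoder head per task: detection, P pick, S pick
        self.heads = nn.ModuleDict({k: nn.Conv1d(hidden, 1, 1)
                                    for k in ("det", "p", "s")})

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.encoder(x)
        h, _ = self.bilstm(h.transpose(1, 2))
        h = self.mamba(h).transpose(1, 2)
        return {k: torch.sigmoid(head(h)).squeeze(1)
                for k, head in self.heads.items()}

LOSS_WEIGHTS = {"det": 0.5, "p": 0.25, "s": 0.25}   # assumed task weights

def weighted_bce(outputs, targets, weights=LOSS_WEIGHTS):
    bce = nn.BCELoss()
    return sum(w * bce(outputs[k], targets[k]) for k, w in weights.items())

model = EQSketch()
x = torch.randn(8, 3, 6000)                     # 3-component waveform windows
out = model(x)
tgt = {k: torch.zeros_like(v) for k, v in out.items()}
loss = weighted_bce(out, tgt)
```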

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 51
77 Entrepreneurial Dynamism and Socio-Cultural Context

Authors: Shailaja Thakur

Abstract:

Managerial literature abounds with discussions of business strategies, success stories as well as cases of failure, which provide an indication of the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profits he earns, marginalizing any discussion on the means that he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets and technology are gauged as very important (weight of 4), while adaptations in terms of technology, raw materials used, and upgradations in skill set are given a slightly lesser weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit the responses from the entrepreneurs on the various strategies that they have employed during the course of their business. Binary as well as graded responses are obtained, weighted and summed up to give the IED, as sketched in the example below. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. This index has universal acceptability but is devoid of the socio-cultural context, which is very central to the success and performance of entrepreneurs. We hypothesize that a society that respects risk-taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one that has a conducive socio-cultural environment for entrepreneurship. For obtaining an idea about social acceptability, we are putting forth questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used for discounting the IED of the respondent entrepreneurs from that region/society. This methodology is being tested on a sample of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that the entrepreneurs in the tribal milieu might be showing a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even an outright hostile environment.
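
The scoring scheme lends itself to a direct computation. A minimal sketch follows, assuming binary yes/no responses (graded responses would be scaled first); the item names are illustrative and the abbreviated list here does not reach the full instrument's stated maximum of 53.

```python
# Weights per the scheme described: 4 = very important, 3 = important,
# 2 = moderately important, 1 = slightly important
WEIGHTS = {
    "changed_product_lines": 4, "changed_markets": 4, "changed_technology": 4,
    "adapted_technology": 3, "adapted_raw_materials": 3, "upgraded_skills": 3,
    "formal_market_analysis": 2, "diversified_related_products": 2,
    "first_generation": 1, "employs_managers": 1, "plans_to_diversify": 1,
}

def ied(responses):
    """Index of Entrepreneurial Dynamism: weighted sum of binary responses."""
    return sum(WEIGHTS[item] for item, yes in responses.items() if yes)

# Hypothetical respondent
answers = {item: item in {"changed_markets", "adapted_technology",
                          "formal_market_analysis", "employs_managers"}
           for item in WEIGHTS}
print(ied(answers), "out of a possible", sum(WEIGHTS.values()))
```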

Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs

Procedia PDF Downloads 257
76 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of Btk, leading to inhibition of Btk enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure refers to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case Review: The number of cases retrieved for the combined drug/adverse drug reaction was 212 global ICSRs as of July 2020. The reviewers selected and assessed the causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); the value 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provided a supportive association (four probable and 15 possible cases). Data Mining: The disproportionality between the observed and the expected reporting rate for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a weaker one, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, which means that 'ibrutinib' with 'cardiac failure' has been reported more often than expected when compared to other medications available in the WHO database; a worked sketch of this calculation follows. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
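
The information component can be reproduced from a 2x2 disproportionality table. A minimal sketch using the shrinkage form IC = log2((observed + 0.5) / (expected + 0.5)) commonly applied by the WHO-UMC; the 212 joint reports come from the text, while the other counts are hypothetical placeholders chosen to land near the reported IC of 1.5.

```python
import math

def information_component(n_joint, n_drug, n_reaction, n_total):
    """IC = log2((observed + 0.5) / (expected + 0.5)); the expected count
    assumes the drug and the reaction are reported independently."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_joint + 0.5) / (expected + 0.5))

# n_joint = 212 ibrutinib/cardiac-failure ICSRs (from the abstract);
# remaining counts are hypothetical stand-ins for VigiBase totals
ic = information_component(n_joint=212, n_drug=60_000,
                           n_reaction=250_000, n_total=200_000_000)
print(f"IC = {ic:.2f}")   # positive => reported more often than expected
```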

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 129
75 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies have been particularly challenging to manage due to the high degree of uncertainty surrounding them. Most research results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized due to various reasons. Any efforts or financial resources put into managing these technologies turn fruitless. High stakes naturally call for better results, which makes the patenting decision harder. A good and well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was not a simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure for thrifty valuation and the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating good relative weights for the later scoring and weighted averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut-off schedule. Such a schedule of realistic practices has yet to be witnessed in current research. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and university officers in our interview group. This research contributes to our current understanding and practices of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes and decision thresholds of key parameters. This research offers a more practical perspective which further completes our extant knowledge. Our results could be impacted by our sample size and even biased a bit by our focus on the Silicon Valley area. Future research, blessed with bigger data sizes and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and analyze our decision factors for different industries.

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 342
74 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta method SEs; and marginal standardisation with permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1,000, and 5,000), event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
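
Of the candidate estimators listed, the modified-Poisson approach is easy to demonstrate: a Poisson GLM fit to binary outcomes with robust (sandwich) standard errors yields an adjusted relative risk. A minimal sketch on simulated trial data, using statsmodels; the simulation parameters are illustrative, not those of the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
treat = rng.integers(0, 2, n)                 # 1:1 randomisation
covar = rng.normal(size=n)                    # one prognostic covariate
# True model on the log-risk scale: RR of treatment = exp(-0.5)
p = np.clip(np.exp(np.log(0.3) - 0.5 * treat + 0.3 * covar), 0, 1)
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, covar]))
# Modified Poisson: Poisson family on binary outcomes + robust SEs
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
rr = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])
print(f"adjusted RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```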

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 114
73 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data we used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones, as well as timestamps indicating when transactions occurred; analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night). Other features included the sector or category of the merchant where the transaction occurred (e.g., retail, groceries, online services), since specific categories may be more prone to fraud, and the type of payment used (e.g., credit, debit, online payment systems), as different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
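
A minimal scikit-learn sketch of the isolation-forest half of the pipeline, on synthetic transaction features shaped like those described (amount, hour of day, encoded merchant category); the contamination rate and the synthetic distributions are assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Synthetic features: log-scaled amount, hour of day, encoded merchant category
normal = np.column_stack([rng.normal(3.0, 0.5, 5000),      # typical amounts
                          rng.normal(14, 4, 5000) % 24,    # daytime activity
                          rng.integers(0, 10, 5000)])
fraud = np.column_stack([rng.normal(6.0, 1.0, 50),         # unusually large
                         rng.normal(3, 1, 50) % 24,        # middle of the night
                         rng.integers(0, 10, 50)])
X = np.vstack([normal, fraud])

# Isolation forest: anomalies are points that are easy to isolate
clf = IsolationForest(n_estimators=200, contamination=0.01, random_state=7)
labels = clf.fit_predict(X)                  # -1 = anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()), "of", len(X))
```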

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 57
72 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and use only risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All the mentioned approaches were based on portfolio volatility as the reference risk measure, but in 2023, the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve either a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists of extending the achievement of typical risk-based approaches' goals to a combined risk measure; a sketch of the resulting objective appears below. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, read as the dispersion of returns around the mean, and that both preserve adherence to a symmetric framework and consideration of the entire returns distribution. They differ from each other in that the former captures the 'normal' or 'ordinary' dispersion of returns, while the latter captures extreme dispersion. Therefore, the combined risk metric, which uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity, allows the asset manager, by varying the 'relevance coefficient' associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process, serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it becomes fundamentally important to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof for the 'equalization effect' concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions for the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
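
The combined objective described above can be written as R(w) = λ·σ(w) + (1−λ)·κ(w), where σ(w) is portfolio volatility and κ(w) is the fourth root of the portfolio fourth moment. A minimal scipy sketch of the risk-minimisation variant on simulated fat-tailed returns; the value of λ and the asset universe are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
R = rng.standard_t(df=5, size=(2500, 4)) * 0.01   # fat-tailed daily returns

def combined_risk(w, lam=0.5):
    """lam * volatility + (1 - lam) * fourth root of portfolio 4th moment."""
    port = R @ w
    vol = port.std()
    kurt4 = np.mean((port - port.mean()) ** 4) ** 0.25
    return lam * vol + (1.0 - lam) * kurt4

n = R.shape[1]
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
bounds = [(0.0, 1.0)] * n                                  # long-only
res = minimize(combined_risk, x0=np.full(n, 1.0 / n),
               bounds=bounds, constraints=cons)
print("weights:", np.round(res.x, 3))
```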

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 65