Search results for: rapid detection
843 Peptide-Gold Nanocluster as an Optical Biosensor for Glycoconjugate Secreted from Leishmania
Authors: Y. A. Prada, Fanny Guzman, Rafael Cabanzo, John J. Castillo, Enrique Mejia-Ospino
Abstract:
In this work, we present results on the synthesis of photoluminescent gold nanoclusters using a small peptide as a template for biosensing applications. We designed a peptide (NBC2854) homologous to the conserved domain spanning residues 215-250 of a galactolectin protein, which can recognize the proteophosphoglycans (PPG) secreted by Leishmania. The peptide was synthesized by multiple solid-phase synthesis using Fmoc chemistry in acid medium. Finally, the peptide was purified by high-performance liquid chromatography on a Vydac C-18 preparative column, with detection at 215 nm using a photodiode array detector. The molecular mass of the peptide was confirmed by MALDI-TOF, and its α-helical structure was verified by circular dichroism. Using this methodology, we obtained novel fluorescent gold nanoclusters (AuNC) with NBC2854 as the template. We describe an easy and fast microsonic method for the synthesis of AuNC with a hydrodynamic size of ≈ 3.0 nm and photoemission at 630 nm. The cysteine residue at the C-terminus of the peptide allows the formation of an Au-S bond, which confers stability on the peptide-based gold nanoclusters. Interactions between the peptide and the gold nanoclusters were confirmed by X-ray photoemission and Raman spectroscopy. Notably, the ultrafine spectra in the MALDI-TOF analysis, containing only 3-7 kDa species, were assigned to Au₈-₁₈[NBC2854]₂ clusters. Finally, we evaluated the peptide-gold nanocluster as an optical biosensor based on fluorescence spectroscopy: in the presence of PPG (0.1 µg·mL⁻¹ to 1000 µg·mL⁻¹), the fluorescence signal was amplified at the same emission wavelength (≈ 630 nm). This suggests a strong interaction between PPG and Pep@AuNC; the increase in fluorescence intensity can be related to the association mechanism that takes place when the target molecule is sensed by the Pep@AuNC conjugate. Further spectroscopic studies are necessary to evaluate the fluorescence mechanism involved in the sensing of PPG by the Pep@AuNC. To the best of our knowledge, this is the first fabrication of an optical biosensor based on Pep@AuNC for sensing biomolecules such as proteophosphoglycans, which are secreted in abundance by Leishmania parasites.
Keywords: biosensing, fluorescence, Leishmania, peptide-gold nanoclusters, proteophosphoglycans
Procedia PDF Downloads 169
842 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study
Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta
Abstract:
Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted at a hospital, focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to Chiro Hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan-Meier plots and log-rank tests. The survival time of severe acute malnutrition was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation (INLA) methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which demonstrated that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced the survival time of children with severe acute malnutrition. Conclusions: This study revealed that children below 24 months, those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, and comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had shorter survival times when affected by severe acute malnutrition. To reduce the death rate of children under 5 years of age, it is necessary to design community-based management of acute malnutrition to ensure early detection and to improve access to and coverage of care for children who are malnourished.
Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time
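A minimal sketch of the survival workflow outlined in this abstract, using the Python lifelines package as a frequentist stand-in for the Bayesian Weibull accelerated failure time model fitted with INLA in the study; the column names and toy records are assumptions for illustration, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullAFTFitter

# Hypothetical records: follow-up in weeks, death indicator, two covariates.
df = pd.DataFrame({
    "time_weeks": [1, 2, 2, 3, 5, 6, 8, 10, 12, 14],
    "died":       [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    "age_months": [9, 14, 30, 11, 40, 18, 36, 48, 12, 50],
    "anemia":     [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})

# Kaplan-Meier estimate of the survival function and the median survival time.
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time_weeks"], event_observed=df["died"])
print("Median survival (weeks):", kmf.median_survival_time_)

# Weibull accelerated failure time model with covariates.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_weeks", event_col="died")
aft.print_summary()
```

A log-rank comparison between covariate groups could be added with lifelines.statistics.logrank_test; the Bayesian analogue in the paper would replace the AFT fit with an INLA or MCMC model.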
Procedia PDF Downloads 47
841 Reduce the Environmental Impacts of the Intensive Use of Glass in New Buildings in Khartoum, Sudan
Authors: Sawsan Domi
Abstract:
Khartoum is considered one of the hottest cities in the world; the mean monthly outdoor temperature remains above 30 ºC. Solar radiation on building surfaces is among the highest values in the world, and buildings in Khartoum receive huge amounts of watts per square metre. Northern, eastern, and western facades always receive more radiation than southern ones; therefore, these facades must be better protected than the others. Among the most important design factors affecting indoor thermal comfort and energy conservation are building envelope design, self-sufficiency in building materials, and the optical and thermo-physical properties of the building envelope. A small sun-facing glazing area is very important for providing thermal comfort in hot dry climates because of the intensive sunshine. This study aims to propose a work plan to help minimize the negative environmental effect of the climate on buildings, taking into account the intensive use of glazing. In the last 15 years, there has been rapid growth in the building sector in Khartoum, accompanied by many strategies that move away from being environmentally friendly. The intensive use of glazing on facades has increased for commercial, industrial, and design reasons, while glass envelopes have led to a rapid increase in temperature through sunlight reflected onto faces, cars, and bodies. Being transparent, glass gives a sense of open space, allows natural lighting and sometimes natural ventilation, and keeps dust and insects out. On the other hand, it costs more and causes overheating, which is unsuitable for a hot dry climate city like Khartoum. Many huge projects are permitted every year by the Ministry of Planning in Khartoum State with designs based on the intensive use of glazing on facades. There are no laws or regulations to control the use of materials in construction; the last building code (the 2008 building code of Khartoum State) only addressed the use of sustainable materials, without considering any environmental aspects. The results of the study will help increase awareness among architects, engineers, and the public about this environmental problem. The objectives range from improving energy performance in buildings to providing high levels of thermal comfort in the indoor environment. As a future project, the study asks what changes can be made to building permit codes and regulations. Recommendations for the governmental sector include obliging the responsible authorities to issue environmentally friendly laws in the building construction field and supporting the renewable energy sector in buildings.
Keywords: building envelope, building regulations, glazed facades, solar radiation
Procedia PDF Downloads 219
840 Investigation of Clusters of MRSA Cases in a Hospital in Western Kenya
Authors: Lillian Musila, Valerie Oundo, Daniel Erwin, Willie Sang
Abstract:
Staphylococcus aureus infections are a major cause of nosocomial infections in Kenya. Methicillin-resistant S. aureus (MRSA) infections are a significant burden to public health and are associated with considerable morbidity and mortality. At a hospital in Western Kenya, two clusters of MRSA cases emerged within short periods of time. In this study, we explored whether these clusters represented a nosocomial outbreak by characterizing the isolates using phenotypic and molecular assays and examining epidemiological data to identify possible transmission patterns. Specimens from the site of infection were collected and cultured, and S. aureus isolates were identified phenotypically and confirmed by APIStaph™. MRSA were identified by cefoxitin disk screening per CLSI guidelines. MRSA were further characterized based on their antibiotic susceptibility patterns and spa gene typing. Characteristics of cases with MRSA isolates were compared with those with MSSA isolated around the same time period. Two cases of MRSA infection were identified in the two-week period between 21 April and 4 May 2015. A further two MRSA isolates were identified on the same day, 7 September 2015. The antibiotic resistance patterns of the two MRSA isolates in the first cluster of cases were different, suggesting that these were distinct isolates. One isolate had spa type t2029 and the other had a novel spa type. The two isolates were obtained from urine and an open skin wound. In the second cluster of MRSA isolates, the antibiotic susceptibility patterns were similar, but the isolates had different spa types: one was t037 and the other a novel spa type different from the novel MRSA spa type in the first cluster. Both cases in the second cluster were admitted to the hospital, but one infection was community-acquired and the other hospital-acquired. Only one of the four MRSA cases was classified as a healthcare-associated infection, acquired post-operatively. When compared to other S. aureus strains isolated within the same time period from the same hospital, only one spa type, t2029, was found in both MRSA and non-MRSA strains. None of the cases infected with MRSA in the two clusters shared any common epidemiological characteristic such as age, sex, or known risk factors for MRSA such as prolonged hospitalization or institutionalization. These data suggest that the observed MRSA clusters were multi-strain clusters and not an outbreak of a single strain. There was no clear relationship between the isolates by spa type, suggesting that no transmission was occurring within the hospital between these cluster cases, but rather that the majority of the MRSA strains were circulating in the community. There was high diversity of spa types among the MRSA strains, with none of the isolates sharing spa types. Identification of disease clusters in space and time is critical for immediate infection control action and patient management. Spa gene typing is a rapid way of confirming or ruling out MRSA outbreaks so that costly interventions are applied only when necessary.
Keywords: cluster, Kenya, MRSA, spa typing
Procedia PDF Downloads 330
839 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures, and emissivity values for various components of the exterior above-grade wall assemblies, were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology in the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
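A worked sketch of the surface energy balance that the methodologies above rely on: conduction through the wall is equated to the sum of convective and radiative exchange at the exterior surface. The coefficient and temperature values below are illustrative assumptions, not measurements from the study home.

```python
STEFAN_BOLTZMANN = 5.67e-8  # W/(m^2*K^4)

def u_value(t_in, t_out, t_surface, t_reflected, emissivity, h_conv):
    """Estimate overall thermal conductance U (W/(m^2*K)) from an outdoor
    IR survey: conductive flux through the wall is set equal to the
    convective plus radiative flux leaving the exterior surface."""
    q_conv = h_conv * (t_surface - t_out)
    q_rad = emissivity * STEFAN_BOLTZMANN * (t_surface**4 - t_reflected**4)
    return (q_conv + q_rad) / (t_in - t_out)

# Illustrative winter readings (kelvin): indoor air 21 C, outdoor air -5 C,
# exterior wall surface -3 C, reflected apparent temperature -7 C.
u = u_value(t_in=294.15, t_out=268.15, t_surface=270.15,
            t_reflected=266.15, emissivity=0.90, h_conv=15.0)
print(f"Estimated U-value: {u:.2f} W/(m2*K)")
```

Comparing such a field estimate with the nominal U-value computed from the wall assembly layers, as the study does, yields the percentage deviations reported above.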
Procedia PDF Downloads 313
838 Dengue Virus Infection Rate in Mosquitoes Collected in Thailand Related to Environmental Factors
Authors: Chanya Jetsukontorn
Abstract:
Dengue hemorrhagic fever is the most important mosquito-borne disease and a major public health problem in Thailand. The most important vector is Aedes aegypti. Environmental factors such as temperature, relative humidity, and biting rate affect dengue virus infection. The most effective measure for prevention is the control of vector mosquitoes. In addition, surveillance of field-caught mosquitoes is imperative for determining the natural vector and can provide an early warning sign of transmission risk in an area. In this study, Aedes aegypti mosquitoes were collected in Amphur Muang, Phetchabun Province, Thailand. The mosquitoes were collected in the rainy season and the dry season, both indoors and outdoors. During mosquito collection, data on environmental factors such as temperature, humidity, and breeding sites were observed and recorded. After identification to species, mosquitoes were pooled according to genus/species and sampling location. Pools consisted of a maximum of 10 Aedes mosquitoes. 70 pools of 675 Aedes aegypti were screened with RT-PCR for flaviviruses. To confirm individual infection and determine the true infection rate, individual mosquitoes from pools that gave positive results for flavivirus detection were tested for dengue virus by RT-PCR. The infection rate was 5.93% (4 positive individuals from 675 mosquitoes). The probability of detecting dengue virus in mosquitoes at neighbouring houses was 1.25 times higher, especially where distances between neighbouring houses and patients' houses were less than 50 meters. The relative humidity in dengue-infected villages with dengue-infected mosquitoes was significantly higher than in villages free from dengue-infected mosquitoes. The indoor biting rate of Aedes aegypti was 14.87 times higher than the outdoor rate, and the biting periods of 09.00-10.00, 10.00-11.00, and 11.00-12.00 yielded 1.77, 1.46, and 0.68 mosquitoes/man-hour, respectively. These findings confirm that environmental factors were related to dengue infection in Thailand. Data obtained from this study will be useful for the prevention and control of the disease.
Keywords: Aedes aegypti, Dengue virus, environmental factors, one health, PCR
Procedia PDF Downloads 145
837 Volunteered Geographic Information Coupled with Wildfire Fire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling
Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey
Abstract:
Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity, frequency, and risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACC) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why the fire behaved in an extreme manner, and how we can learn from the fire incident's story to respond and prepare for future fires in the area. By adding a spatial component to that shared information, this team has been able to visualize shared information about wildfire starts in an interactive map that answers these three critical questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster through map layers and agency links, the number of views in a particular region of a disaster, community involvement, and sharing of this critical resource. Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates how VGI and sophisticated applied cartographic methodology make this an indispensable resource for authoritative information sharing.
Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal
Procedia PDF Downloads 176
836 Hierarchical Zeolites as Potential Carriers of Curcumin
Authors: Ewelina Musielak, Agnieszka Feliczak-Guzik, Izabela Nowak
Abstract:
Based on the latest data, substances of therapeutic interest are expected to be as natural as possible. Therefore, active substances with the highest possible efficacy and low toxicity are sought. Among natural substances with therapeutic effects, those of plant origin stand out. Curcumin, isolated from the Curcuma longa plant, has proven to be particularly important from a medical point of view. Due to its ability to regulate many important transcription factors, cytokines, and protein kinases, curcumin has found use as an anti-inflammatory, antioxidant, antiproliferative, antiangiogenic, and anticancer agent. The unfavorable properties of curcumin, such as low solubility, poor bioavailability, and rapid degradation under neutral or alkaline pH conditions, limit its clinical application. These problems can be solved by combining curcumin with suitable carriers such as hierarchical zeolites, a new class of materials that exhibits several advantages. Hierarchical zeolites used as drug carriers enable delayed release of the active ingredient and promote drug transport to the desired tissues and organs. In addition, hierarchical zeolites play an important role in regulating micronutrient levels in the body and have been used successfully in cancer diagnosis and therapy. To load curcumin onto hierarchical zeolites synthesized from commercial FAU zeolite, solutions containing curcumin, the carrier, and acetone were prepared. The prepared mixtures were then stirred on a magnetic stirrer for 24 h at room temperature. The curcumin-filled hierarchical zeolites were drained into a glass funnel, where they were washed three times with acetone and distilled water, after which the obtained material was air-dried until completely dry. In addition, the effect of adding piperine to a zeolite carrier containing a sufficient amount of curcumin was studied. The resulting products were weighed, and the percentage of pure curcumin in the hierarchical zeolite was calculated. All the synthesized materials were characterized by several techniques: elemental analysis, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), N₂ adsorption, X-ray diffraction (XRD), and thermogravimetric analysis (TGA). The aim of the presented study was to improve the biological activity of curcumin by applying it to hierarchical zeolites based on FAU zeolite. The results showed that the loading efficiency of curcumin into hierarchical zeolites based on commercial FAU-type zeolite is enhanced by modifying the zeolite carrier itself. The hierarchical zeolites proved to be very good and efficient carriers of plant-derived active ingredients such as curcumin.
Keywords: carriers of active substances, curcumin, hierarchical zeolites, incorporation
Procedia PDF Downloads 97
835 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land-use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants and participants' biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated ‘doses’ from produce consumption records), predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning biomonitoring and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model-predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated in contaminated land exposure models. Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
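A minimal sketch of the kind of compartmental kinetics a PBPK model builds on, reduced here to a single gut-to-body compartment pair solved with SciPy; the study's actual MATLAB models resolve many more physiological compartments, and the rate constants and dose below are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified kinetics: ingested element moves from the gut lumen into a
# lumped "body" compartment and is then eliminated first-order.
K_ABS = 0.05    # 1/h, gut-to-body absorption rate (assumed)
K_ELIM = 0.01   # 1/h, elimination rate (assumed)

def pbpk_rhs(t, y):
    gut, body = y
    d_gut = -K_ABS * gut
    d_body = K_ABS * gut - K_ELIM * body
    return [d_gut, d_body]

# Single oral dose of 10 micrograms at t = 0, simulated over 14 days.
sol = solve_ivp(pbpk_rhs, t_span=(0.0, 14 * 24.0), y0=[10.0, 0.0],
                t_eval=np.linspace(0.0, 14 * 24.0, 200))
peak = sol.y[1].max()
print(f"Peak body burden: {peak:.2f} ug at t = {sol.t[sol.y[1].argmax()]:.0f} h")
```

In a full PBPK model the "body" compartment is split into blood, liver, kidney, bone, and so on, with flows parameterised from physiology, which is what allows predicted urine and blood concentrations to be compared against the biomonitoring data.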
Procedia PDF Downloads 319
834 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive, as with LIDAR, or limited in operational range, as with ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether we can measure the relative distance and velocity of the UAV and the ground in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in the estimation of relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; on the other hand, the second approach uses the feature's projection on the camera plane (pixel position) as the measurement while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed approaches. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
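A minimal sketch of the feature-tracking front end described in this abstract, using OpenCV's pyramidal Lucas-Kanade tracker on two consecutive low-resolution frames; the file names and parameter values are assumptions, and the tracked pixel positions would feed the EKF measurement update rather than being used directly.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from the descent sequence (assumed paths).
prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick strong corner features on the ground plane in the previous frame.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                             qualityLevel=0.3, minDistance=7)

# Pyramidal Lucas-Kanade: track those features into the current frame.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, curr_gray, p0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

good_new = p1[status.flatten() == 1].reshape(-1, 2)
good_old = p0[status.flatten() == 1].reshape(-1, 2)

# Mean optical-flow magnitude (pixels/frame): the raw measurement used by
# the first EKF formulation; the second formulation would instead pass the
# tracked pixel coordinates themselves to the filter.
flow = np.linalg.norm(good_new - good_old, axis=1).mean()
print(f"Mean flow over {len(good_new)} features: {flow:.2f} px/frame")
```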
Procedia PDF Downloads 113
833 Clinical Applications of Amide Proton Transfer Magnetic Resonance Imaging: Detection of Brain Tumor Proliferative Activity
Authors: Fumihiro Ima, Shinichi Watanabe, Shingo Maeda, Haruna Imai, Hiroki Niimi
Abstract:
It is important to know the growth rate of brain tumors before surgery because it influences treatment planning, including not only the surgical resection strategy but also adjuvant therapy after surgery. Amide proton transfer (APT) imaging is an emerging molecular magnetic resonance imaging (MRI) technique based on chemical exchange saturation transfer that requires no administration of contrast medium. The underlying assumption in APT imaging of tumors is that there is a close relationship between the proliferative activity of the tumor and mobile protein synthesis. We aimed to evaluate the diagnostic performance of APT imaging of pre- and post-treatment brain tumors. Ten patients with brain tumor underwent conventional and APT-weighted sequences on a 3.0 Tesla MRI before clinical intervention. The maximum and the minimum APT-weighted signals (APTWmax and APTWmin) in each solid tumor region were obtained and compared before and after clinical intervention. All surgical specimens were examined for histopathological diagnosis. Eight of ten patients underwent adjuvant therapy after surgery. The histopathological diagnosis was glioma in 7 patients (WHO grade 2 in 2 patients, WHO grade 3 in 3 patients, and WHO grade 4 in 2 patients), meningioma WHO grade 1 in 2 patients, and primary lymphoma of the brain in 1 patient. High-grade gliomas showed significantly higher APTW signals than low-grade gliomas. APTWmax in one huge parasagittal meningioma infiltrating into the skull bone was higher than that in glioma WHO grade 4. On the other hand, APTWmax in another convexity meningioma was the same as that in glioma WHO grade 3. Diagnosis of primary lymphoma of the brain was possible with APT imaging before pathological confirmation. APTW signals in residual tumors decreased dramatically within one year after adjuvant therapy in all patients. APT imaging demonstrated excellent diagnostic performance for the planning of surgery and adjuvant therapy of brain tumors.
Keywords: amides, magnetic resonance imaging, brain tumors, cell proliferation
Procedia PDF Downloads 139
832 The Contribution of Genetic Polymorphisms of Tumor Necrosis Factor Alpha and Vascular Endothelial Growth Factor into the Unfavorable Clinical Course of Ulcerative Colitis
Authors: Y. I. Tretyakova, S. G. Shulkina, T. Y. Kravtsova, A. A. Antipova, N. Y. Kolomeets
Abstract:
The research aimed to assess the functional significance of tumor necrosis factor-alpha (TNF-α) gene polymorphism at the -308G/A (rs1800629) region and vascular endothelial growth factor A (VEGFA) gene polymorphism at the -634G/C (rs2010963) region in the development of ulcerative colitis (UC), focusing on patients from the Perm region, Russia. We examined 70 UC patients and 50 healthy donors during the active phase of the disease. Our focus was on TNF-α and VEGF concentrations in the blood serum, as well as TNF-α and VEGFA gene polymorphisms at the -308G/A and -634G/C regions, respectively. We found that TNF-α and VEGF levels were significantly higher in patients with severe UC and high endoscopic activity compared to those with milder forms of the disease and low endoscopic activity. These tests could serve as additional non-invasive markers for assessing mucosal damage in the large intestine of UC patients. Analysis of the frequency of allele variations in the TNF-α gene -308G/A (rs1800629) revealed a significantly higher occurrence of the unfavorable homozygote AA in UC patients compared to donors. Additionally, the major allele G and the allele pair GG were more frequent in patients with mild to moderate disease and grade 1-2 endoscopic activity than in those with severe UC and grade 3-4 endoscopic activity (χ²=14.19; p=0.000). We also observed that the mutant allele A and the unfavorable homozygote AA were associated with severe progressive UC. The occurrence of the mutant allele increased the risk of severe UC by 5 times (OR 5.03; CI 12.07-12.21). We did not find any significant differences in the frequency of the CC homozygote (χ²=1.02; p=0.6; OR=1.32) or the mutant allele C of the VEGFA gene -634G/C (rs2010963) (χ²=0.01; p=0.913; OR=0.97) between groups of UC patients and healthy individuals. However, we detected that the mutant allele C and the unfavorable homozygote CC of the VEGFA gene were associated with more severe endoscopic changes in the colonic mucosa of UC patients (χ²=25.76; p=0.000; OR=0.15). The presence of the mutant allele increased the risk of severe UC by 6 times (OR 6.78; CI 3.13-14.7). We found a direct correlation between TNF-α and VEGFA gene polymorphisms, increased production of the same factors, disease severity, and endoscopic activity (p=0.000). Therefore, the presence of the mutant allele A and homozygote AA of the TNF-α gene at the -308G/A region and the mutant allele C and homozygote CC of the VEGFA gene at the -634G/C region are associated with an unfavorable clinical course of UC, frequent recurrences, and rapid progression. These findings should be considered when making prognoses regarding the clinical course of the disease and selecting treatment strategies. The presence of the homozygote AA in the TNF-α gene (rs1800629) is considered a sign of genetic predisposition to UC.
Keywords: gene polymorphism, TNF-α, ulcerative colitis, VEGF
Procedia PDF Downloads 74
831 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry
Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as "future-oriented skills," include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits of making sense of multiple perspectives, encouraging critical thinking, and developing learners' capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk-taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how "Imagining" has been conceptually defined in the fields of social psychology, education, and beyond. We then procured and synthesised validated scales measuring the dimensions of Imagining and related concepts like creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from the aforementioned scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative procedure of item removal. Reliability and validity of the scale's dimensions were assessed through Cronbach's alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining's four dimensions, the scale established high reliability with a Cronbach's alpha of .96. In addition, the convergent validity of the Imagining Scale was established. A lack of strong discriminant validity may point to overlaps with other components of the Dialogical Map as a measure of learning. Thus, a holistic approach to forming the tool, encompassing all eight different components, may be preferable.
Keywords: learning, education, imagining, pedagogy, dialogical teaching
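A minimal sketch of the internal-consistency check reported above: Cronbach's alpha computed from an item-response matrix with NumPy. The toy matrix and item count are assumptions; the study's reported alpha of .96 comes from its own 250-respondent data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1.0 - item_vars / total_var)

# Hypothetical responses: 6 respondents answering 4 Likert-type items.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

The CFA step itself would typically be run with a structural-equation package (for example semopy in Python or lavaan in R) rather than by hand.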
Procedia PDF Downloads 92
830 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of Q, water level, and rainfall hourly records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of these applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
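A minimal sketch of an AI-based hourly stream flow predictor of the kind described above, using a scikit-learn multilayer perceptron on lagged rainfall and water-level features; the synthetic data, lag choice, severity thresholds, and network size are assumptions standing in for the 8753-record Selangor dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
rain_upstream = rng.gamma(2.0, 2.0, n)                    # hourly rainfall, mm
level_upstream = 0.3 * rain_upstream + rng.normal(0, 0.2, n)

# Target flow responds to upstream conditions with a 3-hour lag (assumed Lt).
lag = 3
flow = (5.0 + 4.0 * np.roll(rain_upstream, lag)
        + 2.0 * np.roll(level_upstream, lag) + rng.normal(0, 1.0, n))
X = np.column_stack([np.roll(rain_upstream, lag),
                     np.roll(level_upstream, lag)])[lag:]
y = flow[lag:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R on test set:", round(r2_score(y_te, model.predict(X_te)) ** 0.5, 3))

# Early-warning step: map predicted flow onto severity thresholds (assumed).
q_pred = model.predict(X_te)
labels = np.select([q_pred > 40, q_pred > 30, q_pred > 20],
                   ["danger", "warning", "alert"], default="normal")
print(dict(zip(*np.unique(labels, return_counts=True))))
```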
Procedia PDF Downloads 248
829 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult when there is limited information about the nature of the attack, and even more so when attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have some limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events but do not show how the events relate to each other or which event possibly caused another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given the time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect the relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect the causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools, such as IDSs, and would cost expert human analysts significant time, if they could be found at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events in both computed and observed data is within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
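A minimal sketch of the kind of conditional independence test such a framework might apply to per-event-type count series: partial correlation between events A and B given a conditioning event C, with a Fisher z test. The synthetic series, lag structure, and significance level are assumptions; the paper does not specify its exact test.

```python
import numpy as np
from scipy import stats

def partial_corr_test(x, y, z):
    """Test X independent of Y given Z via residual correlation (Fisher z)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residual of X on Z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residual of Y on Z
    r, _ = stats.pearsonr(rx, ry)
    n = len(x)
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - 4)
    p = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return r, p

# Hypothetical per-minute counts: port-scan events (a) tend to precede
# brute-force events (b); background noise (c) is a shared confounder.
rng = np.random.default_rng(1)
c = rng.poisson(3, 500).astype(float)
a = c + rng.poisson(2, 500)
b = np.roll(a, 1) + rng.poisson(1, 500)           # b lags a by one step

# Does a at time t predict b at time t+1, given c at time t?
r, p = partial_corr_test(a[:-1], b[1:], c[:-1])
print(f"partial r = {r:.2f}, p = {p:.4f}",
      "-> keep edge a -> b" if p < 0.05 else "-> drop edge")
```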
Procedia PDF Downloads 156
828 Islam, Gender and Education in Contemporary Georgia: The Example of Kvemo Kartli
Authors: N. Gelovani, D. Ismailov, S. Bochorishvili
Abstract:
Religious minorities of Georgia include Muslims. Their composition is quite diverse, both in ethnic terms and in terms of affiliation within Islamic denominations. The majority of Muslims are Azerbaijanis, who chiefly live in Kvemo Kartli (Bolnisi, Gardabani, Dmanisi, Tetri Tskaro, Marneuli, and Tsalka). Interest among researchers of Islamic history is driven by the geopolitical interests of Georgia, its centuries-old contacts with the Islamic world, the considerable share of the population professing Islam, the increasing influence of the Islamic factor in current religious-political processes in the world, the rising Muslim religious self-consciousness in the post-Soviet states, the significant challenges of international terrorism, and the prospects of rapid globalization. A rise in the level of religious identity among Muslim citizens of Georgia (above all those who are not ethnic Georgians) is noticeable. New mosques have been constructed, and young people are sometimes sent to religious educational institutions in Muslim countries to gain higher Islamic education. At a time when gender studies, whose goal is to eliminate gender-based discrimination and violence in societies, are prominent, it is essential to conduct research in Georgia on a concrete problem: Islamic tradition, women, and education in Georgia. A woman's right to education is an important indicator of women's general status in a society. Appropriate resources, innovative analysis of Georgian ethnological materials, and surveys of the population (quantitative and qualitative research reports, working papers) determine the success of such research. The present work studies the interrelation of Islam, gender, and education in contemporary Georgia, using the example of the Azerbaijani population of Kvemo Kartli during the period 1992-2016. We researched the history of Muslim religious education centers in Tbilisi and Kvemo Kartli (Bolnisi, Gardabani, Dmanisi, Tetri Tskaro, Marneuli, and Tsalka) in 1992-2016, on the one hand, and analysed the results of sociological surveys, on the other. As a result of our investigation, we found that Azeri women in the Kvemo Kartli (Georgia) region mostly receive their education in Georgia and Azerbaijan. Educational and cultural institutions are inaccessible to most Azeri women. The main reasons are the absence of educational and religious institutions at their places of residence and state policies towards Georgia's Muslims.
Keywords: Islam, gender, Georgia, education
Procedia PDF Downloads 227
827 The Link between Corporate Governance and EU Competition Law Enforcement: A Conditional Logistic Regression Analysis of the Role of Diversity, Independence and Corporate Social Responsibility
Authors: Jeroen De Ceuster
Abstract:
This study is the first empirical analysis of the link between corporate governance and European Union competition law. Although competition law enforcement is often studied through the lens of competition law, we offer an alternative perspective by looking at a number of corporate governance factors at the level of the board of directors. We find that undertakings where the Chief Executive Officer is also chairman of the board are twice as likely to violate European Union competition law. No significant relationship was found between European Union competition law infringements and the gender diversity of the board, the size of the board, the percentage of directors appointed after the Chief Executive Officer, the percentage of independent directors, or the presence of a corporate social responsibility (CSR) committee. This contribution is based on a 1-1 matched peer study. Our sample includes all ultimate parent companies with a board that have been sanctioned by the European Commission for either anticompetitive agreements or abuse of dominance over the period from 2004 to 2018. Each of these companies was matched to a company that has its headquarters in the same country, belongs to the same industry group, is active in the European Economic Area, and is the nearest neighbor to the infringing company in terms of revenue. Our final sample includes 121 pairs. As is common with matched peer studies, we use conditional logistic regression (CLR) to analyze the differences within these pairs. The only statistically significant independent variable after controlling for size and performance is CEO/Chair duality. The results indicate that companies whose Chief Executive Officer also functions as chairman of the board are twice as likely to infringe European Union competition law. This is in line with the monitoring theory of the board of directors, which states that its primary function is to monitor top management. Since competition law infringements are mostly organized by management and hidden from board directors, the results suggest that a Chief Executive Officer who is also chairman is more likely to be either complicit in the infringement or less critical towards his day-to-day colleagues, and thus impedes proper detection of competition law infringements by the board.
Keywords: corporate governance, competition law, board of directors, board independence, gender diversity, corporate social responsibility
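A minimal sketch of the matched-pair conditional logistic regression described above, using statsmodels' ConditionalLogit with the pair identifier as the grouping variable; the toy data frame, column names, and simulated effect sizes are assumptions, not the study's 121-pair sample.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Hypothetical 1-1 matched pairs: each pair_id holds one infringing firm
# (infringed = 1) and its matched peer (infringed = 0).
rng = np.random.default_rng(0)
n_pairs = 60
df = pd.DataFrame({
    "pair_id": np.repeat(np.arange(n_pairs), 2),
    "infringed": np.tile([1, 0], n_pairs),
})
# CEO/Chair duality is made more common among infringing firms.
df["ceo_duality"] = rng.binomial(1, np.where(df["infringed"] == 1, 0.6, 0.3))
df["board_size"] = rng.integers(6, 16, size=len(df))

model = ConditionalLogit(df["infringed"],
                         df[["ceo_duality", "board_size"]],
                         groups=df["pair_id"])
result = model.fit()
print(result.summary())
print("Odds ratios:", np.exp(result.params).round(2))
```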
Procedia PDF Downloads 138
826 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
In recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are, in comparison with other industries, adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
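A minimal sketch of the classical Weight of Evidence calculation that the Hybrid Model starts from: WoE per bin as the log ratio of the bin's share of goods to its share of bads, plus the bin's Information Value contribution. The binning, column names, and toy data are assumptions; the Hybrid Model would then replace the directly observed bad rates with rates implied by the ML score distribution.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"utilization": rng.uniform(0, 1, 5000)})
# Hypothetical target: default probability rises with utilization.
df["bad"] = rng.binomial(1, 0.05 + 0.25 * df["utilization"])

# Bin the characteristic and compute WoE / IV per bin.
df["bin"] = pd.qcut(df["utilization"], q=5)
grouped = df.groupby("bin", observed=True)["bad"].agg(["count", "sum"])
grouped["bads"] = grouped["sum"]
grouped["goods"] = grouped["count"] - grouped["sum"]
dist_good = grouped["goods"] / grouped["goods"].sum()
dist_bad = grouped["bads"] / grouped["bads"].sum()
grouped["woe"] = np.log(dist_good / dist_bad)
grouped["iv"] = (dist_good - dist_bad) * grouped["woe"]

print(grouped[["goods", "bads", "woe", "iv"]].round(3))
print("Information Value of utilization:", round(grouped["iv"].sum(), 3))
```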
Procedia PDF Downloads 134
825 The Role of Hypothalamus Mediators in Energy Imbalance
Authors: Maftunakhon Latipova, Feruza Khaydarova
Abstract:
Obesity is considered a chronic metabolic disease that occurs at any age. Regulation of body weight is carried out through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The nuclei of the hypothalamus are interconnected and interdependent in receiving, integrating, and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity and to examine the markers Orexin A and Neuropeptide Y. Questionnaires were administered to over 200 people aged 18 to 35 years, covering eating disorders and hidden depression (on the Zung scale). Anthropometry included waist circumference, hip circumference, BMI, weight, and height. Based on the collected data, participants were divided into 3 groups: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 analysed persons, 86% had eating disorders. Of these, 60% of eating disorders were associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% had moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. One group of people with obesity had eating disorders and moderate to severe depressive disorder, and group 2 was overweight with mild depressive disorder. According to laboratory data, the first group had the lowest concentrations of Orexin A and Neuropeptide Y in blood serum. Conclusions: Being overweight or obese is an early signal of many diseases, and prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and with signal transmission in the orexinergic system of the hypothalamus.
Keywords: obesity, endocrinology, hypothalamus, overweight
Procedia PDF Downloads 76
824 Water Crisis or Crisis of Water Management: Assessing Water Governance in Iran
Authors: Sedigheh Kalantari
Abstract:
Like many countries in the arid and semi-arid belt, Iran experiences a natural limitation in the availability of water resources. However, rapid socioeconomic development has created a serious water crisis in a nation that was once one of the world's pioneers in sustainable water management, thanks to the Persians' historic contribution to hydraulic engineering inventions such as the qanat. Exogenous issues like the changing climate, frequent droughts, and international sanctions are only crisis catalysts, not the main cause of the water crisis; a resilient water management system would be expected to cope with these periodic external pressures. The current dramatic water security issues in Iran are rooted in managerial, political, and institutional challenges rather than engineering and technical issues, and the country is suffering from challenges in water governance. Instead of rigorous water conservation efforts, the country is still focused on a supply-driven approach, technology and centralized methods, and structural solutions that aim to increase water supply, while the effectiveness of water governance and management has often been neglected. To solve these issues, it is necessary to assess the present situation and its evolution over time. In this respect, establishing water governance assessment mechanisms will be a significant aspect of this paper. The research framework is a conceptual framework to assess the governance performance of Iran, to critically diagnose problematic issues and areas, and to proffer empirically based solutions and determine the best possible steps towards transformational processes. This concept aims to measure the adequacy of current solutions and strategies designed to ameliorate these problems and then to develop and prescribe adequate future solutions. Thus, the analytical framework developed in this paper seeks to provide insights into the key factors influencing water governance in Iranian cities, the institutional frameworks to manage water across scales and authorities, and multi-level management gaps and policy responses, through an evidence-based approach and good practices to drive reform toward sustainability and water resource conservation. The findings of this paper show that the current structure of the water governance system in Iran, coupled with the lack of a comprehensive understanding of the root causes of the problem, leaves minimal hope for developing sustainable solutions to Iran's increasing water crisis. In order to follow sustainable development approaches, Iran needs to replace symptom management with problem prevention.
Keywords: governance, Iran, sustainable development, water management, water resources
Procedia PDF Downloads 26
823 Enhancing Healthcare Delivery in Low-Income Markets: An Exploration of Wireless Sensor Network Applications
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Healthcare delivery in low-income markets is fraught with numerous challenges, including limited access to essential medical resources, inadequate healthcare infrastructure, and a significant shortage of trained healthcare professionals. These constraints lead to suboptimal health outcomes and a higher incidence of preventable diseases. This paper explores the application of Wireless Sensor Networks (WSNs) as a transformative solution to enhance healthcare delivery in these underserved regions. WSNs, comprising spatially distributed sensor nodes that collect and transmit health-related data, present opportunities to address critical healthcare needs. Leveraging WSN technology facilitates real-time health monitoring and remote diagnostics, enabling continuous patient observation and early detection of medical issues, especially in areas with limited healthcare facilities and professionals. The implementation of WSNs can enhance the overall efficiency of healthcare systems by enabling timely interventions, reducing the strain on healthcare facilities, and optimizing resource allocation. This paper highlights the potential benefits of WSNs in low-income markets, such as cost-effectiveness, increased accessibility, and data-driven decision-making. However, deploying WSNs involves significant challenges, including technical barriers like limited internet connectivity and power supply, alongside concerns about data privacy and security. Moreover, robust infrastructure and adequate training for local healthcare providers are essential for successful implementation. The paper further examines future directions for WSNs, emphasizing innovation, scalable solutions, and public-private partnerships. By addressing these challenges and harnessing the potential of WSNs, it is possible to revolutionize healthcare delivery and improve health outcomes in low-income markets.
Keywords: wireless sensor networks (WSNs), healthcare delivery, low-income markets, remote patient monitoring, health data security
Procedia PDF Downloads 36
822 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants
Authors: Zarina Chokparova, Ighor Uzhinsky
Abstract:
Carbon dioxide emissions resulting from large-scale burning of fossil fuels, as in the oil industry or power plants, lead to a range of severe implications, including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some precarious and costly approaches to mitigating the harm of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of their drawbacks and mitigation), one physically and commercially available technology for its capture and disposal is a supersonic system for inertial extraction of CO₂ from post-combustion streams. Because the flue gas emitted from the combustion system has a carbon dioxide concentration of 10-15 volume percent, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop of temperature and pressure. Thus, conversion of potential energy into kinetic energy causes desublimation of CO₂. The solidified carbon dioxide can be sent to a separate vessel for further disposal. The major advantages of the current solution are its economic efficiency, physical stability, and the compactness of the system, as well as the fact that no additional chemical media are needed. However, several challenges remain to be addressed to optimize the system: increasing the size of the separated CO₂ particles (their effective diameters are on the micrometer scale), reducing the amount of concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determination of the thermodynamic conditions of the vapor-solid mixture, including specification of a valid and accurate equation of state, remains an essential goal. Due to the high speeds and temperatures reached during the process, the influence of the emitted heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status will be presented, and a program for further evaluation of this approach will be proposed.
Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture
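A minimal sketch of the isentropic nozzle relations behind the temperature drop described above: static temperature and pressure as functions of Mach number, used to check when the flow falls below the CO₂ desublimation temperature. The stagnation conditions, heat-capacity ratio, and threshold value are illustrative assumptions for a flue-gas-like mixture.

```python
def static_conditions(t0_k, p0_kpa, mach, gamma=1.33):
    """Isentropic relations: T = T0/(1 + (gamma-1)/2 * M^2) and the
    corresponding static pressure, for a calorically perfect gas."""
    factor = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    return t0_k / factor, p0_kpa / factor ** (gamma / (gamma - 1.0))

T0, P0 = 320.0, 110.0          # stagnation temperature (K) and pressure (kPa)
CO2_DESUBLIMATION_K = 194.7    # approx. frost point of pure CO2 at 1 atm

for mach in (1.0, 1.5, 2.0, 2.5):
    t, p = static_conditions(T0, P0, mach)
    tag = "below CO2 frost point" if t < CO2_DESUBLIMATION_K else ""
    print(f"M = {mach:.1f}: T = {t:6.1f} K, p = {p:6.1f} kPa  {tag}")
```

In the dilute flue-gas stream the relevant threshold is the frost point at the CO₂ partial pressure, which is lower than the pure-CO₂ value used here, so an accurate vapor-solid equation of state is needed, as the abstract notes.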
Procedia PDF Downloads 141821 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body
Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker
Abstract:
This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by a Polish company, Aviation Artur Trendak. This gyroplane has been studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent procedure in any kind of design. When scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic, and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, one should pay attention to how the values of these coefficients behave if certain criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number should be focused on. This is the ratio of inertial to viscous forces. Since its numerator is the flow speed multiplied by the characteristic dimension (with a constant kinematic viscosity coefficient), the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients specified in this research depend on the real forces that act on an object, its characteristic dimension, the medium speed, and variations in its density. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed with a T-1 low-speed wind tunnel (the diameter of its measurement volume is 1.5 m) and a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a continuously operating low-speed wind tunnel with an open test section. The research covered a number of selected speeds of undisturbed flow, i.e., V = 20, 30 and 40 m/s, corresponding to the Reynolds numbers (referred to 1 m) Re = 1.31∙10⁶, 1.96∙10⁶, 2.62∙10⁶, for angles of attack ranging over -15° ≤ α ≤ 20°. Our research resulted in basic aerodynamic characteristics and allowed us to observe the impact of undisturbed flow speed on the correlation of aerodynamic coefficients as a function of the angle of attack of the gyroplane body. If the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly affected. At speeds from 20 m/s to 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if the angle of attack of 0° is included. Within the same speed range, the coefficient of pitching moment, Cmy, changes by -21.1% up to 7.3% if the angles of attack of -15° and -10° are excluded, or by -142.8% up to 618.4% if the angles of attack of -15° and -10° are included. These discrepancies in the coefficients of aerodynamic forces definitely need to be considered while designing the aircraft. For example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel
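The Reynolds numbers quoted for the T-1 runs can be reproduced directly from Re = V·L/ν with the 1 m reference length; the short sketch below does this, assuming a standard kinematic viscosity of air (a value not stated in the abstract).

```python
# Illustrative sketch (assumptions noted): reproducing the Reynolds numbers quoted
# in the abstract via Re = V * L / nu, referred to a 1 m characteristic length.

NU_AIR = 1.52e-5   # assumed kinematic viscosity of air, m^2/s (roughly 20 degC)
L_REF = 1.0        # characteristic length the abstract refers Re to, m

def reynolds(speed_m_s: float, length_m: float = L_REF, nu: float = NU_AIR) -> float:
    """Reynolds number: the ratio of inertial to viscous forces."""
    return speed_m_s * length_m / nu

if __name__ == "__main__":
    for v in (20.0, 30.0, 40.0):
        print(f"V = {v:4.1f} m/s  ->  Re = {reynolds(v):.2e}")
    # Approximately 1.3e6, 2.0e6, 2.6e6, matching the values quoted for the T-1 tests.
```

The same relation shows why the dynamic criterion is demanding for a 1:8 model: keeping Re constant while the characteristic dimension is reduced eightfold would require an eightfold increase in tunnel speed, which is why the coefficient shifts with flow speed reported above matter in practice.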
Procedia PDF Downloads 393820 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that, consequently, our evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil and then a gas injection period for pressure maintenance and an increase in the ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
Procedia PDF Downloads 93819 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations
Authors: Priyanka Bharti
Abstract:
Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research, we are now able to comprehend the cognitive processes involved in the decoding of visual information. However, many areas of visual research still require intensive study to deliver adequate explanations of the observed effects. Visuals are a mode of representing information through images, symbols, and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research indicates that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process for users. However, most of the literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in a normal situation (e.g., our day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal real-life situation which may inflict serious ramifications on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario in the absence or lack of any preparation, yet still take swift and appropriate decisions to save lives or possessions. The resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes, and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the theory of naturalistic decision making by experts has been understood in far more depth than that of the ordinary user. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension so that appropriate decisions can be taken during an emergency situation.Keywords: cognition, visual, decision making, graphics, recognition
Procedia PDF Downloads 268818 Morphological Process of Villi Detachment Assessed by Computer-Assisted 3D Reconstruction of Intestinal Crypt from Serial Ultrathin Sections of Rat Duodenum Mucosa
Authors: Lise P. Labéjof, Ivna Mororó, Raquel G. Bastos, Maria Isabel G. Severo, Arno H. de Oliveira
Abstract:
This work presents an alternative mode of intestinal mucosa renewal that may allow a better understanding of the total loss of villi after irradiation. A morphological method of 3D reconstruction was tested using micrographs of serial sections of rat duodenum. We used hundreds of sections of each duodenum specimen, placed on glass slides and examined under a light microscope. Those containing the detachment, approximately a dozen, were chosen for observation under a transmission electron microscope (TEM). Each of these sections was glued on a block of Epon resin and recut into about a hundred 60 nm-thick sections. Ribbons of these ultrathin sections were distributed on a series of copper grids in the same order of appearance as during microtomy. They were then stained with solutions of uranyl and lead salts and observed under a TEM. The sections were photographed, and the electron micrographs showing signs of cell detachment were transferred into two software packages: ImageJ to align the cellular structures and Reconstruct to perform the 3D reconstruction. Epithelial cells were detected that exhibited all the signs of programmed cell death and were localized at the villus-crypt junction. Their nuclei were irregular in shape, with condensed chromatin in clumps. Their cytoplasm was darker than that of neighboring cells and contained many swollen mitochondria. In some places in the sections, we could see intercellular spaces enlarged by the presence of shrunken cells, which displayed irregularly shaped plasma membranes, as if the cell interdigitations were drawing apart from each other. The three-dimensional reconstruction of the crypts allowed us to observe a gradual loss of intercellular contacts of crypt cells in the longitudinal plane of the duodenal mucosa. In the transverse direction, there was a gradual increase in the intercellular space, as if these cells were moving away from one another. This observation allows us to assume that the gradual separation of the cells at the villus-crypt junction is the beginning of the mucosa detachment. Thus, the shrinking of cells due to apoptosis is the way in which they detach from the mucosa, and progressively the villi also. These results are in agreement with our initial hypothesis and thus demonstrate that the villi become detached from the mucosa at the villus-crypt junction by the programmed cell death process. This type of loss of entire villi helps explain the rapid denudation of the intestinal mucosa in case of irradiation.Keywords: 3dr, transmission electron microscopy, ionizing radiations, rat small intestine, apoptosis
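Because the 3D reconstruction relies on first registering the serial micrographs to one another, a minimal alignment sketch is included below. It is not the authors' workflow (they used ImageJ and Reconstruct); it only illustrates one common approach, translational registration by phase cross-correlation, and the file names are hypothetical.

```python
# Illustrative sketch (not the authors' code): aligning a stack of serial-section
# micrographs by translational registration before 3D reconstruction, analogous to
# the alignment step performed in ImageJ. File names and the registration method
# (phase cross-correlation) are assumptions.

import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_stack(image_paths: list[str]) -> np.ndarray:
    """Load serial sections and align each one to the previously aligned section."""
    sections = [io.imread(p, as_gray=True).astype(float) for p in image_paths]
    aligned = [sections[0]]
    for section in sections[1:]:
        # Estimate the translation that best overlays this section on the last one.
        offset, _, _ = phase_cross_correlation(aligned[-1], section)
        aligned.append(nd_shift(section, offset))
    return np.stack(aligned)  # (n_sections, height, width) volume for rendering

# volume = align_stack([f"section_{i:03d}.tif" for i in range(100)])  # hypothetical paths
```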
Procedia PDF Downloads 378817 Clinical Applications of Amide Proton Transfer Magnetic Resonance Imaging: Detection of Brain Tumor Proliferative Activity
Authors: Fumihiro Imai, Shinichi Watanabe, Shingo Maeda, Haruna Imai, Hiroki Niimi
Abstract:
It is important to know the growth rate of brain tumors before surgery because it influences treatment planning, including not only the surgical resection strategy but also adjuvant therapy after surgery. Amide proton transfer (APT) imaging is an emerging molecular magnetic resonance imaging (MRI) technique based on chemical exchange saturation transfer without the administration of a contrast medium. The underlying assumption in APT imaging of tumors is that there is a close relationship between the proliferative activity of the tumor and mobile protein synthesis. We aimed to evaluate the diagnostic performance of APT imaging of pre- and post-treatment brain tumors. Ten patients with brain tumors underwent conventional and APT-weighted sequences on a 3.0 Tesla MRI scanner before clinical intervention. The maximum and the minimum APT-weighted signals (APTWmax and APTWmin) in each solid tumor region were obtained and compared before and after clinical intervention. All surgical specimens were examined for histopathological diagnosis. Eight of the ten patients underwent adjuvant therapy after surgery. The histopathological diagnosis was glioma in 7 patients (WHO grade 2 in 2 patients, WHO grade 3 in 3 patients, and WHO grade 4 in 2 patients), meningioma WHO grade 1 in 2 patients, and primary lymphoma of the brain in 1 patient. High-grade gliomas showed significantly higher APTW signals than low-grade gliomas. APTWmax in one huge parasagittal meningioma infiltrating into the skull bone was higher than that in glioma WHO grade 4. On the other hand, APTWmax in another convexity meningioma was the same as that in glioma WHO grade 3. Diagnosis of primary lymphoma of the brain was possible with APT imaging before pathological confirmation. APTW signals in residual tumors decreased dramatically within one year after adjuvant therapy in all patients. APT imaging demonstrated excellent diagnostic performance for the planning of surgery and adjuvant therapy of brain tumors.Keywords: amides, magnetic resonance imaging, brain tumors, cell proliferation
Procedia PDF Downloads 86816 Land Use Influence on the 2014 Catastrophic Flood in the Northeast of Peninsular Malaysia
Authors: Zulkifli Yusop
Abstract:
The severity of the December 2014 flood on the east coast of Peninsular Malaysia has raised concern over the adequacy of existing land use practices and policies. This article assesses flood responses to selective logging, plantation establishment (oil palm and rubber), and their subsequent management regimes. The hydrological impacts were evaluated on two levels: on-site (mostly upstream) and off-site, to reflect the cumulative impact downstream. Results of experimental catchment studies suggest that the on-site impact on floods could be kept to a minimum when selective logging strictly adheres to the existing guidelines. However, increases in flood potential and sedimentation rate were observed with logging intensity and slope steepness. Forest conversion to plantation shows the highest impacts. Except on heavily compacted surfaces, ground revegetation is usually rapid, occurring within two years of the cessation of the logging operation. The hydrological impacts of plantation opening and replanting could be significantly reduced once the cover crop has fully established, which normally takes three to six months after sowing. However, as oil palms become taller and the canopy closes, the cover crop tends to die off due to light competition, and its protective function gradually diminishes. The exposed soil is further compacted by harvesting machinery, which subsequently leads to greater overland flow and erosion rates. As such, the hydrological properties of mature oil palm plantations are generally poorer than those of young plantations. In hilly areas, the undergrowth in rubber plantations is usually denser than under oil palm. The soil under rubber trees is also less compacted, as latex collection is done manually. Considering the cumulative effects of land use over space and time, selective logging seems to pose the least impact on flood potential, followed by planting rubber for latex, oil palm, and Latex Timber Clone (LTC). The cumulative hydrological impact of LTC plantations is the most severe because of their short replanting rotation (12 to 15 years) compared to oil palm (25 years) and rubber for latex (35 years). Furthermore, the areas gazetted for LTC are mostly located on steeper slopes, which are more susceptible to landslides and erosion. Forest has a limited capability to store excess rainfall and is only effective in attenuating regular floods. Once the hydrologic storage is exceeded, the excess rainfall will appear as flood water. Therefore, for big floods, the rainfall regime has a much bigger influence than land use.Keywords: selective logging, plantation, extreme rainfall, debris flow
Procedia PDF Downloads 347815 Evaluating of Chemical Extractants for Assessment of Bioavailable Heavy Metals in Polluted Soils
Authors: Violina Angelova, Krasimir Ivanov, Stefan Krustev, Dimitar Dimitrov
Abstract:
The availability of a metal is characterised by the quantity that passes from soil into different extractants or by its content in plants. In the literature, the terms 'available forms of compounds' and 'mobile' are often considered equivalents of the term 'accessible' to plants. A rapid and sufficiently reliable method for defining the plant-accessible forms is their extraction with different extractants that imitate the functioning of the root system. The criterion usually taken for the suitability of an extractant for this purpose is a significant statistical correlation between the quantities of the element extracted from soil and its content in plants. The aim of this work was to evaluate the effectiveness of various extractions (DTPA-TEA, AB-DTPA, Mehlich 3, 0.01 M CaCl₂, 1 M NH₄NO₃) for the determination of the bioavailability of heavy metals in industrially polluted soils from the metallurgical activity near Plovdiv and Kardjali, Bulgaria. Heavy metal contents were measured with ICP-OES. The results showed that the extraction capacity was as follows: Mehlich 3>AB-DTPA>DTPA-TEA>CaCl₂>NaNO₃. The content of the mobile forms of heavy metals depends on the nature of the metal ion, the nature of the extractant, and pH. The obtained results show that CaCl₂ extracts a greater quantity of mobile forms of heavy metals than NH₄NO₃. DTPA-TEA and AB-DTPA are capable of extracting from the soil not only the heavy metals participating in the exchange processes but also the heavy metals bound in carbonates and organic complexes, as well as those bound and occluded in oxides and secondary clay minerals. AB-DTPA extracts slightly more heavy metals than DTPA-TEA. The darker color of the solutions obtained with AB-DTPA indicates that considerable quantities of organic matter are being destroyed. A comparison of the mobile forms of heavy metals extracted from clean and highly polluted soils has revealed that in the polluted soils the greater portion of heavy metals exists in mobile form. High correlation coefficients were obtained between the metals extracted with the different extractants and their total content in soil (r=0.9). A positive correlation between pH, soil organic matter, and the extracted quantities of heavy metals has been found. The results of the correlation analysis revealed that the heavy metals extracted by DTPA-TEA, AB-DTPA, Mehlich 3, CaCl₂ and NaNO₃ correlated significantly with plant uptake. Significant correlations were found between the amounts extracted by DTPA-TEA, AB-DTPA, and CaCl₂ and the heavy metal concentrations in plants. The application of extraction methods containing chelating agents would be recommended in future research on the availability of heavy metals in polluted soils.Keywords: availability, chemical extractants, heavy metals, mobile forms
Procedia PDF Downloads 355814 Imbalance on the Croatian Housing Market in the Aftermath of an Economic Crisis
Authors: Tamara Slišković, Tomislav Sekur
Abstract:
This manuscript examines factors that affect demand and supply on the housing market in Croatia. The period from the beginning of this century until 2008 was characterized by a strong expansion of construction, housing, and the real estate market in general. Demand for residential units was expanding, and this was supported by favorable lending conditions offered by banks. Indicators on the supply side, such as the number of newly built houses and the construction volume index, were also increasing. Rapid growth of demand, along with somewhat slower supply growth, led to a situation in which new apartments were sold before the completion of residential buildings. This resulted in a rise in housing prices, which was an indication of a clear link between housing prices and supply and demand on the housing market. However, after 2008 general economic conditions in Croatia worsened and demand for housing fell dramatically, while supply declined at a much slower pace. Given that there is a gap between supply and demand, it can be concluded that the housing market in Croatia is in imbalance. This trend is accompanied by a relatively small decrease in housing prices. The final result of such movements is a large number of unsold housing units at relatively high price levels. For this reason, it can be argued that housing prices are sticky and that, consequently, the price level in the aftermath of a crisis does not correspond to the discrepancy between supply and demand on the Croatian housing market. The degree of rigidity of housing prices can be determined by including the housing price as an explanatory variable in the housing demand function. Other independent variables are a demographic variable (e.g., the number of households), the interest rate on housing loans, households' disposable income, and rent. The equilibrium price is reached when the demand for housing equals its supply, and the speed of adjustment of actual prices to equilibrium prices reveals the extent to which prices are rigid. The latter requires the inclusion of the housing price with a time lag as an independent variable when estimating the demand function. We also observe the supply side of the housing market, in order to explain to what extent housing prices explain the movement of new construction activity, together with other variables that describe the supply. In this context, we test whether new construction on the Croatian market depends on current prices or on prices with a time lag. The number of dwellings is used to approximate new construction (a flow variable), while the housing prices (current or lagged), the quantity of dwellings in the previous period (a stock variable), and a series of costs related to new construction are the independent variables. We conclude that the key reason for the imbalance in the Croatian housing market should be sought in the relative relationship between the price elasticities of supply and demand.Keywords: Croatian housing market, economic crisis, housing prices, supply imbalance, demand imbalance
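One stylized way to write down the stock-flow framework described in this abstract is sketched below; the notation and the linear functional forms are assumptions made for illustration, not the authors' estimated specification.

```latex
% Sketch of a stock-flow housing-market model with sticky prices (assumed notation):
% D_t demand, C_t new construction (flow), S_t housing stock, P_t housing price,
% HH_t households, r_t mortgage rate, Y_t disposable income, R_t rent, K_t construction costs.
\begin{align}
  D_t &= \alpha_0 + \alpha_1 P_t + \alpha_2 HH_t + \alpha_3 r_t + \alpha_4 Y_t + \alpha_5 R_t + \varepsilon_t \\
  C_t &= \beta_0 + \beta_1 P_{t-1} + \beta_2 S_{t-1} + \beta_3 K_t + u_t \\
  S_t &= (1 - \delta)\, S_{t-1} + C_t \\
  P_t - P_{t-1} &= \lambda\, (P_t^{*} - P_{t-1}), \qquad 0 < \lambda \le 1
\end{align}
```

Here P_t^{*} denotes the price that would clear the market (demand equal to supply), and λ is the speed of adjustment: estimating the demand function with the lagged price included yields λ, and a value close to zero corresponds to the price stickiness argued for above, while replacing P_{t-1} with P_t in the construction equation tests whether builders respond to current rather than lagged prices.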
Procedia PDF Downloads 271