Search results for: fluorescence techniques
5710 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction systems. As these applications are of much importance and demand high accuracy, the face recognition system is expected to be more robust while requiring less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
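A minimal sketch of the two computational building blocks named above, a Gabor filter bank for feature extraction and a k-NN matcher, is given below; the kernel size, scales, orientations, and distance metric are illustrative assumptions, the 2D-LDA projection between the two stages is omitted, and this is not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a 2-D Gabor kernel at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=(4, 8, 16), n_orient=8):
    """Convolve the image with a bank of Gabor filters and stack the response magnitudes."""
    feats = []
    for lam in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = fftconvolve(img, gabor_kernel(21, lam / 2, theta, lam), mode="same")
            feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

def knn_classify(test_feat, train_feats, train_labels, k=1):
    """k-NN by Euclidean distance over the (optionally LDA-projected) feature vectors."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```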
Procedia PDF Downloads 467
5709 Clinical Trial of VEUPLEXᵀᴹ TBI Assay to Help Diagnose Traumatic Brain Injury by Quantifying Glial Fibrillary Acidic Protein and Ubiquitin Carboxy-Terminal Hydrolase L1 in the Serum of Patients Suspected of Mild TBI by Fluorescence Immunoassay
Authors: Moon Jung Kim, Guil Rhim
Abstract:
The clinical sensitivity of the “VEUPLEXᵀᴹ TBI assay”, a clinical trial medical device, in mild traumatic brain injury was 28.6% (95% CI, 19.7%-37.5%), and the clinical specificity was 94.0% (95% CI, 89.3%-98.7%). In addition, when the results analyzed by marker were combined, the sensitivity was higher when the two tests, UCHL1 and GFAP, were interpreted together than when either test was used alone. Additionally, when sensitivity and specificity were analyzed based on CT results for the mild traumatic brain injury patient group, the clinical sensitivity for the 2 CT-positive cases was 50.0% (95% CI: 1.3%-98.7%), and the clinical specificity for the 19 CT-negative cases was 68.4% (95% CI: 43.5%-87.4%). Since the low clinical sensitivity for the two CT-positive cases was not statistically significant due to the small number of samples analyzed, it was judged necessary to secure and analyze more samples in the future. Regarding the clinical specificity results for the 19 CT-negative cases, many patients were clinically diagnosed with mild traumatic brain injury but nevertheless received a CT-negative result, and about 31.6% of them showed abnormal results on the VEUPLEXᵀᴹ TBI assay. Although traumatic brain injury was not detected on CT in these 31.6% of cases, the possibility of an actual mild brain injury could not be ruled out, and it was judged that this could be confirmed through follow-up observation of the patients. In addition, among patients with mild traumatic brain injury, CT examinations were often not performed because the symptoms were very mild, yet about 25% or more of these patients showed abnormal results on the VEUPLEXᵀᴹ TBI assay. In fact, no damage may be observed with the naked eye immediately after traumatic brain injury, and the injury may not be observed even on CT; in some cases, however, brain hemorrhage may occur after a certain period of time (delayed cerebral hemorrhage), so patients who show abnormal results on the VEUPLEXᵀᴹ TBI assay should be followed up for delayed cerebral hemorrhage. In conclusion, it was judged difficult to identify mild traumatic brain injury with the VEUPLEXᵀᴹ TBI assay based only on clinical findings without CT results, that is, based on the GCS value. Even CT does not detect all mild traumatic brain injuries, so the absence of evidence of injury on CT does not necessarily mean that no traumatic brain injury is present. In the long term, more patients should be included to evaluate the usefulness of the VEUPLEXᵀᴹ TBI assay in the detection of microscopic traumatic brain injuries without using CT.
Keywords: brain injury, traumatic brain injury, GFAP, UCHL1
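The binomial point estimates and confidence intervals quoted above can be reproduced with a short calculation. The sketch below uses the exact (Clopper-Pearson) interval, which matches the small-sample CI reported for the two CT-positive cases; the counts passed in are illustrative back-calculations rather than the trial's raw data.

```python
from scipy.stats import beta

def proportion_with_ci(successes, total, alpha=0.05):
    """Point estimate and exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    p = successes / total
    lo = beta.ppf(alpha / 2, successes, total - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, total - successes) if successes < total else 1.0
    return p, lo, hi

# e.g. 1 assay-positive result out of the 2 CT-positive cases (illustrative counts):
sens, lo, hi = proportion_with_ci(1, 2)
print(f"sensitivity = {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 50.0% (1.3%-98.7%)
```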
Procedia PDF Downloads 106
5708 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe⁺³ Ions in Aqueous Solutions
Authors: Aidan Battison, Neliswa Mama
Abstract:
Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation and restoration. Some of the disadvantages of conventional sensing methods include expensive instrumentation, well-controlled experimental conditions, time-consuming procedures and sometimes complicated sample preparation. On the contrary, the development of fluorescent chemosensors for biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at position -3- have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe³⁺ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) by the “sphere of action” static quenching model. The tested metal ions included Cd²⁺, Pb²⁺, Ag⁺, Na⁺, Ca²⁺, Cr³⁺, Fe³⁺, Al³⁺, Cd²⁺, Ba²⁺, Cu²⁺, Co²⁺, Hg²⁺, Zn²⁺ and Ni²⁺. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe³⁺ in the presence of competing metal cations. S1 could also be used for the detection of Fe³⁺ in a mixture of CH₃CN/H₂O. The binding stoichiometry between S1 and Fe³⁺ was determined by using both Job's plot and Benesi-Hildebrand analysis. The binding was shown to occur in a 1:1 ratio between the sensor and the metal cation. Reversibility studies between S1 and Fe³⁺ were conducted by using EDTA. The binding site of Fe³⁺ on S1 was determined by using ¹³C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons from the coumarin carbonyl and the triazole carbon double bond.
Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole
Procedia PDF Downloads 163
5707 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can easily be copied and distributed illegally. Copyright protection for authors and owners is necessary. Therefore, digital watermarking techniques play an important role as a valid solution for authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. Therefore, we discuss in this paper two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second is based on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations and a 3×3 block. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area of the watermarked image. Better PSO results were obtained when using a swarm size of 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
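A minimal sketch of DWT-domain embedding and the PSNR/MSE quality metrics reported above is given below; the embedding rule, the strength alpha, and the choice of the horizontal-detail sub-band are illustrative assumptions standing in for the paper's PSO/GA-optimised coefficient selection.

```python
import numpy as np
import pywt  # PyWavelets

def embed_watermark(cover, watermark_bits, alpha=0.05):
    """Additively embed 0/1 bits (as +/-1) on the largest-magnitude detail coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    flat = cH.ravel().copy()
    idx = np.argsort(-np.abs(flat))[: watermark_bits.size]   # high-energy coefficients
    flat[idx] += alpha * np.abs(flat[idx]) * (2 * watermark_bits - 1)
    cH = flat.reshape(cH.shape)
    # inverse DWT; output shape matches the cover for even-sized images
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def psnr_and_mse(original, watermarked, peak=255.0):
    mse = np.mean((original.astype(float) - watermarked) ** 2)
    return 10 * np.log10(peak ** 2 / mse), mse
```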
Procedia PDF Downloads 228
5706 Pre-Implementation of Total Body Irradiation Using Volumetric Modulated Arc Therapy: Full Body Anthropomorphic Phantom Development
Authors: Susana Gonçalves, Joana Lencart, Anabela Gregório Dias
Abstract:
Introduction: In combination with chemotherapy, Total Body Irradiation (TBI) is most often used as part of the conditioning regimen prior to allogeneic hematopoietic stem cell transplantation. Conventional TBI techniques have a long application time and non-conformal beam application, with the inability to individually spare organs at risk. Our institution's intention is to start using Volumetric Modulated Arc Therapy (VMAT) techniques to increase the homogeneity of the delivered radiation. As a first approach, a dosimetric plan was performed on a computed tomography (CT) scan of a Rando Alderson anthropomorphic phantom (head and torso), using a set of six arcs distributed along the phantom. However, a full body anthropomorphic phantom is essential to carry out technique validation and implementation. Our aim is to define the physical and chemical characteristics and the ideal manufacturing procedure of upper and lower limbs for our anthropomorphic phantom, in order to later validate TBI using VMAT. Materials and Methods: To study the best fit between our phantom and the limbs, a CT scan of the Rando Alderson anthropomorphic phantom was acquired. The CT was performed on GE Healthcare equipment (model Optima CT580 W), with a slice thickness of 2.5 mm. This CT was also used to assess the electron density of soft tissue and bone through Hounsfield unit (HU) analysis. Results: CT images were analyzed and measurements were made for the ideal upper and lower limbs. Upper limbs should be built with the following dimensions: 43 cm length and 7 cm diameter (next to the shoulder section). Lower limbs should be built with the following dimensions: 79 cm length and 16.5 cm diameter (next to the thigh section). As expected, soft tissue and bone have very different electron densities. This is important for choosing and analyzing different materials to better represent soft tissue and bone characteristics. The approximate HU values for soft tissue and bone should be 35 HU and 250 HU, respectively. Conclusion: At the moment, several compounds are being developed based on different types of resins and additives in order to be able to control and mimic the densities of the various constituent tissues. Concurrently, several manufacturing techniques are being explored to make it possible to produce the upper and lower limbs in a simple and inexpensive way, in order to finally carry out a systematic and appropriate study of total body irradiation. This preliminary study was a good starting point to demonstrate the feasibility of TBI with VMAT.
Keywords: TBI, VMAT, anthropomorphic phantom, tissue equivalent materials
Procedia PDF Downloads 80
5705 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has three objectives. First, it aims at describing the ability of the second-grade students to write review text without applying the Lesson Study model at SMPN 18 Makassar. Second, it seeks to describe the ability of the second-grade students to write review text by applying the Lesson Study model at SMPN 18 Makassar. Third, it aims at testing the effectiveness of the Lesson Study model in writing review text at SMPN 18 Makassar. This research used a true experimental design with a posttest-only group design involving two groups: one control class and one experimental class. The research population comprised all 250 second-grade students at SMPN 18 Makassar, distributed across 8 classes. The sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, consisting of 30 students. The research instruments were in the form of observation and tests. The collected data were analyzed using descriptive statistical techniques and inferential statistical techniques with a t-test, processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 students (47%) obtained a score of more than 7.5, categorized as inadequate; (2) in the experimental class, 26 students (87%) obtained a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective to be applied in writing review text. Based on the comparison of the ability of the control class and the experimental class, the value of t-count is greater than the value of t-table (2.411 > 1.667). This means that the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
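A minimal sketch of the independent-samples t-test behind the t-count versus t-table comparison above; the score arrays are synthetic placeholders (the study's data were processed in SPSS 21), and the critical value is quoted from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(7.2, 0.6, 30)        # placeholder scores, control class (VIII2)
experimental = rng.normal(7.8, 0.6, 30)   # placeholder scores, experimental class (VIII8)

t_count, p_value = stats.ttest_ind(experimental, control, equal_var=True)
t_table = 1.667  # critical value quoted in the abstract for the study's df and alpha
print(f"t-count = {t_count:.3f}, accept H1: {t_count > t_table}")
```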
Procedia PDF Downloads 203
5704 Empowering Transformers for Evidence-Based Medicine
Authors: Jinan Fiaidhi, Hashmath Shaik
Abstract:
Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence from the body of biomedical literature. An important challenge confronted by medical practitioners is the long time needed to browse, filter, summarize and compile information from different medical resources. Deep learning can help in solving this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison and Outcome). This article describes the use of deep learning techniques for Q&A, based on transformer models like BERT and GPT, to answer PICO clinical questions that can be used for evidence-based practice, extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach an acceptable state-of-the-art performance based on a two-stage bootstrapping process involving filtering relevant articles followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments to empower our bootstrapping techniques with patch attention to the most important keywords in the clinical case and the PICO questions. Our bootstrapping patched with attention shows the relevance of the evidence collected based on entropy metrics.
Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers
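As a hedged illustration of the kind of extractive transformer Q&A step described above, the sketch below runs a publicly available BERT-style reader over one retrieved abstract for a single PICO element; the model name, texts, and question are assumptions for demonstration and do not represent the authors' two-stage bootstrapping system.

```python
from transformers import pipeline

# Extractive question answering with a publicly available SQuAD-tuned model.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

retrieved_abstract = (
    "In a randomized controlled trial of adults with type 2 diabetes, "
    "metformin reduced HbA1c by 1.1% compared with placebo over 24 weeks."
)
pico_outcome_question = "What was the outcome of the intervention compared with placebo?"

result = qa(question=pico_outcome_question, context=retrieved_abstract)
print(result["answer"], result["score"])  # answer span and model confidence
```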
Procedia PDF Downloads 47
5703 Overview of Pre-Analytical Lab Errors in a Tertiary Care Hospital at Rawalpindi, Pakistan
Authors: S. Saeed, T. Butt, M. Rehan, S. Khaliq
Abstract:
Objective: To determine the frequency of pre-analytical errors in samples taken from patients for various lab tests at Fauji Foundation Hospital, Rawalpindi. Material and Methods: All lab specimens received at the lab for diagnostic purposes from indoor and outdoor patients of Fauji Foundation Hospital, Rawalpindi, were included. The total number of samples received in the lab was recorded in the computerized program made for the hospital. All errors observed in the pre-analytical process, including patient identification, sampling techniques, test collection procedures, and specimen transport/processing and storage, were recorded in the log book kept for the purpose. Results: A total of 476616 specimens were received in the lab during the period of study, including 237931 and 238685 from outdoor and indoor patients, respectively. Forty-one percent of the samples (n=197976) revealed pre-analytical discrepancies. The discrepancies included hemolyzed samples (34.8%), clotted blood (27.8%), incorrect samples (17.4%), unlabeled samples (8.9%), insufficient specimens (3.9%), request forms without authorized signature (2.9%), empty containers (3.9%) and tube breakage during centrifugation (0.8%). Most of these pre-analytical discrepancies were observed in samples received from the wards, revealing inappropriate sample collection by the medical staff of the wards, as most of the outdoor samples are collected by lab staff who are properly trained in sample collection. Conclusion: It is mandatory to educate phlebotomists and paramedical staff, particularly those performing duties in the wards, regarding the timing and techniques of sampling, the appropriate containers to use, and early delivery of the samples to the lab, in order to reduce pre-analytical errors.
Keywords: pre-analytical lab errors, tertiary care hospital, hemolyzed, paramedical staff
Procedia PDF Downloads 204
5702 Preparation of Conductive Composite Fiber by the Reduction of Silver Particles onto Hydrolyzed Polyacrylonitrile Fiber
Authors: Z. Okay, M. Kalkan Erdoğan, M. Şahin, M. Saçak
Abstract:
Polyacrylonitrile (PAN) is one of the most common and cheap fiber-forming polymers because of its high strength and high abrasion resistance. Alkaline hydrolysis of PAN fiber can form products with conjugated sequences of –C=N–, acrylamide, sodium acrylate, and amidine. In this study, PAN fiber was hydrolyzed in a solution of sodium hydroxide, and this hydrolyzed PAN (HPAN) fiber was used to prepare a conductive composite fiber with silver particles. Electrically conductive PAN fiber has the potential to be used in a variety of materials such as antistatic materials, life jackets and static charge reducing products. We monitored the change in the weight loss values of the PAN fiber with hydrolysis time. It was observed that a 60% weight loss was obtained after 7 h of hydrolysis under the investigated conditions, but the fiber lost its fibrous structure. A hydrolysis time of 5 h was found to be suitable in terms of preserving the fibrous structure. The change in the conductivity values of the composite with the preparation conditions, such as hydrolysis time and silver ion concentration, was studied. PAN fibers with different degrees of hydrolysis were treated with aqueous solutions containing different concentrations of silver ions by continuous stirring at 20 °C for 30 min, and a composite having a maximum conductivity of 2 S/cm could be prepared. The antibacterial property of the silver-containing conductive HPAN fibers was also investigated. While the hydrolysis of the PAN fiber was characterized with FTIR and SEM techniques, the silver reduction process of the HPAN fiber was investigated with SEM and TGA-DTA techniques.
Keywords: composite, conducting polymer, fiber, polyacrylonitrile
Procedia PDF Downloads 479
5701 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
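A minimal sketch of the machine learning-based intrusion detection component mentioned above, using a Random Forest classifier evaluated by accuracy and sensitivity; the feature names and synthetic data are illustrative assumptions, not the study's dataset or its exact model configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(42)
# placeholder traffic features: session duration, bytes sent, bytes received, failed logins
X = rng.normal(size=(1000, 4))
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)  # 1 = intrusion

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity:", recall_score(y_test, pred))  # recall of the intrusion class
```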
Procedia PDF Downloads 34
5700 Selection of Most Appropriate Poplar and Willow Cultivars for Landfill Remediation Using Plant Physiology Parameters
Authors: Andrej Pilipović, Branislav Kovačević, Marina Milović, Lazar Kesić, Saša Pekeč, Leopold Poljaković-Pajnik, Saša Orlović
Abstract:
The effect of landfills on the environment is reflected in the dispersion of contaminants onto surrounding soils by the groundwater plume. Such negative effects can be mitigated by establishing vegetative buffers around landfills. The "TreeRemEnergy" project, funded by the Science Fund of the Republic of Serbia – Green program, focuses on the development of phytobuffers for landfill phytoremediation using Short Rotation Woody Crops (SRWC) plantations whose biomass can further be used for energy. One of the goals of the project is to select the most appropriate poplar (Populus sp.) and willow (Salix sp.) clones through phytorecurrent selection, which involves testing of various breeding traits. Physiological parameters make a significant contribution to the breeding process aimed at early detection of potential candidates. This study involved testing the effect of the landfill soils on the photosynthetic processes of the selected poplar and willow candidates. For this purpose, gas exchange, chlorophyll content and chlorophyll fluorescence were measured on the tested plants. The obtained results showed that there were differences in the influence of the controlled sources of variation on the examined physiological parameters. The effect of clone was significant in all parameters, while the effect of the substrate was not statistically significant in any of the measured parameters. However, the effect of the Clone×Substrate interaction was significant in intercellular CO2 concentration (ci), stomatal conductance (gs) and transpiration rate (E), suggesting that the water regime of the tested clones responded differently to the tested soils. Some clones showed more "generalist" behavior (380, 107/65/9, and PE19/66), while "specialist" behavior was recorded in clones PE4/68, S1-8, and 79/64/2. On the other hand, there was no significant effect of the tested substrate on the pigment content measured with a SPAD meter. The results of this study allowed us to narrow the group of clones for further trials in field conditions.
Keywords: clones, net photosynthesis, WUE, transpiration, stomatal conductance, SPAD
Procedia PDF Downloads 67
5699 Homogenization of a Non-Linear Problem with a Thermal Barrier
Authors: Hassan Samadi, Mustapha El Jarroudi
Abstract:
In this work, we consider the homogenization of a non-linear problem in a periodic medium with two periodic connected media exchanging a heat flux through their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity at a rate λ = λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to a critical value depending on λ and ε. Our method is based on Γ-convergence techniques.
Keywords: variational methods, epiconvergence, homogenization, convergence technique
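As a schematic illustration only (the paper's exact nonlinearity, geometry, and critical scaling are not specified here), the setting can be pictured through an interface exchange condition and an associated energy of the following form:

```latex
% Two components \Omega_{1\varepsilon}, \Omega_{2\varepsilon} exchange heat across
% the interface \Gamma_\varepsilon; signs depend on the orientation of the normal n.
\[
  -\nabla u_1 \cdot n \;=\; -\nabla u_2 \cdot n \;=\; \lambda(\varepsilon)\,(u_1 - u_2)
  \qquad \text{on } \Gamma_\varepsilon ,
\]
\[
  F_\varepsilon(u_1,u_2) \;=\;
    \int_{\Omega_{1\varepsilon}} f(\nabla u_1)\,dx
  + \int_{\Omega_{2\varepsilon}} f(\nabla u_2)\,dx
  + \frac{\lambda(\varepsilon)}{2}\int_{\Gamma_\varepsilon} |u_1 - u_2|^2 \, d\sigma .
\]
% The three homogenized limits correspond to the interfacial term vanishing,
% remaining of order one, or forcing u_1 = u_2 in the limit, according to the
% critical scaling of \lambda(\varepsilon) with \varepsilon.
```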
Procedia PDF Downloads 525
5698 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements
Authors: M. A. García, J. Vinolas, A. Hernando
Abstract:
Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to move towards a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. The classical mechanical tests, as for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are directly inapplicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element upon applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen in between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen changed the measured induction, allowing us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.
Keywords: magnetoelastic, magnetic induction, mechanical stress, steel
Procedia PDF Downloads 51
5697 Design, Synthesis and Evaluation of 4-(Phenylsulfonamido)Benzamide Derivatives as Selective Butyrylcholinesterase Inhibitors
Authors: Sushil Kumar Singh, Ashok Kumar, Ankit Ganeshpurkar, Ravi Singh, Devendra Kumar
Abstract:
In the spectrum of neurodegenerative diseases, Alzheimer's disease (AD) is characterized by the presence of amyloid β plaques and neurofibrillary tangles in the brain. It results in cognitive and memory impairment due to the loss of cholinergic neurons, which is considered to be one of the contributing factors. Donepezil, an acetylcholinesterase (AChE) inhibitor which also inhibits butyrylcholinesterase (BuChE) and improves memory and the brain's cognitive functions, is the most successful and most prescribed drug to treat the symptoms of AD. The present work is based on the design of selective BuChE inhibitors using computational techniques. In this work, machine learning models were trained using classification algorithms, followed by screening of a diverse chemical library of compounds. Various molecular modelling and simulation techniques were used to obtain the virtual hits. The amide derivatives of 4-(phenylsulfonamido)benzoic acid were synthesized and characterized using ¹H and ¹³C NMR, FTIR and mass spectrometry. The enzyme inhibition assays were performed on equine plasma BuChE and electric eel AChE by the method developed by Ellman et al. Compounds 31, 34, 37, 42, 49, 52 and 54 were found to be active against equine BuChE. N-(2-chlorophenyl)-4-(phenylsulfonamido)benzamide and N-(2-bromophenyl)-4-(phenylsulfonamido)benzamide (compounds 34 and 37) displayed IC50 values of 61.32 ± 7.21 and 42.64 ± 2.17 nM, respectively, against equine plasma BuChE. Ortho-substituted derivatives were more active against BuChE. Further, the ortho-halogen and ortho-alkyl substituted derivatives were found to be the most active among all, with minimal AChE inhibition. The compounds were selective toward BuChE.
Keywords: Alzheimer disease, butyrylcholinesterase, machine learning, sulfonamides
Procedia PDF Downloads 140
5696 Appropriation of Cryptocurrencies as a Payment Method by South African Retailers
Authors: Neliswa Dyosi
Abstract:
Purpose - Using an integrated Technology-Organization-Environment (TOE) framework and the model of technology appropriation (MTA) as a theoretical lens, this interpretive qualitative study seeks to understand and explain the factors that influence the appropriation, non-appropriation, and disappropriation of bitcoin as a payment method by South African retailers. Design/methodology/approach - The study adopts the interpretivist philosophical paradigm. Multiple case studies will be adopted as a research strategy. For data collection, the study follows a qualitative approach. Qualitative data will be collected from six retailers in various industries. Semi-structured interviews and documents will be used as the data collection techniques. Purposive and snowballing sampling techniques will be used to identify participants within the organizations. Data will be analyzed using thematic analysis. Originality/value - Using the deductive approach, the study seeks to provide a descriptive and explanatory contribution to theory. The study contributes to theory development by integrating the MTA and TOE frameworks as a means to understand the technology adoption behaviors of organizations, in this case retailers. This is also the first study that looks at an integrated approach of the Technology-Organization-Environment (TOE) framework and the MTA framework to understand the adoption and use of a payment method. South Africa is ranked amongst the top ten countries in the world on cryptocurrency adoption. There is, however, still a dearth of literature on the current state of adoption and usage of bitcoin as a payment method in South Africa. The study will contribute to the existing literature as bitcoin cryptocurrency is gaining popularity as an alternative payment method across the globe.
Keywords: cryptocurrency, bitcoin, payment methods, blockchain, appropriation, online retailers, TOE framework, disappropriation, non-appropriation
Procedia PDF Downloads 137
5695 Physical, Chemical and Mineralogical Characterization of Construction and Demolition Waste Produced in Greece
Authors: C. Alexandridou, G. N. Angelopoulos, F. A. Coutelieris
Abstract:
The construction industry in Greece consumes annually more than 25 million tons of natural aggregates, originating mainly from quarries. At the same time, more than 2 million tons of construction and demolition waste are deposited every year, usually without control, thereby increasing the environmental impact of this sector. A potential alternative for saving natural resources and minimizing landfilling could be the recycling and re-use of Construction and Demolition Waste (CDW) in concrete production. Moreover, in order to conform to European legislation, Greece is obliged to recycle non-hazardous construction and demolition waste to a minimum of 70% by 2020. In this paper, characterization of recycled materials - commercially and laboratory produced, coarse and fine Recycled Concrete Aggregates (RCA) - has been performed. Namely, X-ray fluorescence (XRF) and X-ray diffraction (XRD) analyses were used for chemical and mineralogical analysis, respectively. Physical properties such as particle density, water absorption, sand equivalent and resistance to fragmentation were also determined. This study, the first of its kind in Greece, aims at outlining the differences between RCA and natural aggregates and evaluating their possible influence on concrete performance. Results indicate that the chemical composition of RCA is enriched in Si, Al, and alkali oxides compared to natural aggregates. XRD results indicated the presence of calcite and quartz, with minor peaks of mica and feldspars. Of all the evaluated physical properties of coarse RCA, only water absorption and resistance to fragmentation seem to have a direct influence on the properties of concrete. Low sand equivalent and significantly high water absorption values indicate that fine fractions of RCA cannot be used for concrete production unless further processed. The chemical properties of RCA in terms of water-soluble ions are similar to those of natural aggregates. Four different concrete mixtures were produced and examined, replacing natural coarse aggregates with RCA at ratios of 0%, 25%, 50% and 75%, respectively. Results indicate that concrete mixtures containing recycled concrete aggregates show a minor deterioration of their properties (3-9% lower compressive strength at 28 days) compared to conventional concrete containing the same cement quantity.
Keywords: chemical and physical characterization, compressive strength, mineralogical analysis, recycled concrete aggregates, waste management
Procedia PDF Downloads 234
5694 Land Cover, Land Surface Temperature, and Urban Heat Island Effects in Tropical Sub Saharan City of Accra
Authors: Eric Mensah
Abstract:
The effects of rapid urbanisation of tropical sub-Saharan developing cities on local and global climate are of great concern due to the negative impacts of Urban Heat Island (UHI) effects. The importance of urban parks, vegetative cover and forest reserves in these tropical cities has been undervalued, with rapid degradation and loss of these vegetative covers to urban developments, which continues to cause an increase in daily mean temperatures and changes to local climatic conditions. Using Landsat data of the same months and period intervals, the spatial variations of land cover changes, temperature, and vegetation were examined to determine how vegetation moderates local temperature and the effects of urbanisation on daily mean temperatures over the past 12 years. The remote sensing techniques of maximum likelihood supervised classification, land surface temperature retrieval, and the normalised difference vegetation index (NDVI) were used to analyse and create the land use land cover (LULC), land surface temperature (LST), and vegetation and non-vegetation cover maps, respectively. Results from the study showed an increase in daily mean temperature of 0.80 °C as a result of a rapid increase in urban area of 46.13 sq. km and a loss of vegetative cover of 46.24 sq. km between 2005 and 2017. The LST map also shows the existence of the UHI within the urban areas of Accra and the potential mitigating effects offered by forest and vegetative cover, as demonstrated by the cool islands around the Achimota ecological forest and the University of Ghana botanical gardens areas.
Keywords: land surface temperature, climate, remote sensing, urbanisation
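A minimal sketch of the NDVI computation behind the vegetation and non-vegetation mapping described above; the reflectance inputs and the 0.2 vegetation threshold are illustrative assumptions rather than the study's exact Landsat processing chain.

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index from red and near-infrared bands."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

def vegetation_mask(red, nir, threshold=0.2):
    """Binary vegetation / non-vegetation map from a simple NDVI threshold."""
    return ndvi(red, nir) > threshold
```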
Procedia PDF Downloads 321
5693 Predicting the Human Impact of Natural Onset Disasters Using Pattern Recognition Techniques and Rule Based Clustering
Authors: Sara Hasani
Abstract:
This research focuses on natural sudden onset disasters, characterised as 'occurring with little or no warning and often cause excessive injuries far surpassing the national response capacities'. Based on a panel analysis of the historic record of 4,252 natural onset disasters between 1980 and 2015, a predictive method was developed to predict the human impact of a disaster (fatalities, injured, homeless) with less than 3% error. The geographical dispersion of the disasters includes every country where the data were available and cross-examined from various humanitarian sources. The records were then filtered into 4,252 records of disasters where the five predictive variables (disaster type, HDI, DRI, population, and population density) were clearly stated. The procedure was designed based on a combination of pattern recognition techniques and rule-based clustering for prediction, with discrimination analysis to validate the results further. The result indicates that there is a relationship between the human impact of a disaster and the five socio-economic characteristics of the affected country mentioned above. As a result, a framework was put forward which could predict a disaster's human impact based on its severity rank in the early hours of the disaster strike. The predictions in this model were outlined in worst-case and best-case scenarios, which respectively inform the lower and higher range of the prediction. The necessity of developing such a predictive framework is highlighted by the fact that, despite the existing research in the literature, a framework for predicting the human impact and estimating the needs at the time of the disaster has yet to be developed. This can further be used to allocate resources in the response phase of the disaster, when data are scarce.
Keywords: disaster management, natural disaster, pattern recognition, prediction
Procedia PDF Downloads 154
5692 Competing Risks Modeling Using within Node Homogeneity Classification Tree
Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya
Abstract:
To design a tree that maximizes within-node homogeneity, there is a need for a homogeneity measure that is appropriate for event history data with multiple risks. We consider the use of Deviance and Modified Cox-Snell residuals as measures of impurity in a Classification and Regression Tree (CART) and compare our results with the results of Fiona (2008), in which the homogeneity measures were based on the Martingale residual. A data structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The results of the univariate competing-risks analysis revealed that using Deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance technique. Bone marrow transplant data and a double-blinded randomized clinical trial, conducted in order to compare two treatments for patients with prostate cancer, were used to demonstrate the efficiency of our proposed method vis-à-vis the existing ones. Results from empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (Deviance = 16.6498) performs better than both the Martingale residual (Deviance = 160.3592) and the Deviance residual (Deviance = 556.8822) for both the event of interest and the competing risks. Additionally, results from the prostate cancer data also reveal the better performance of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the Martingale residual (MSE = 0.1853148) and the Deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree
Procedia PDF Downloads 273
5691 Erector Spinae Plane Block versus Paravertebral Block in Breast Surgery
Authors: Widad Kouachi, Nacera Benmouhoub
Abstract:
Background: Erector spinae plane block (ESP) and thoracic paravertebral block (PVB) are two widely used regional anesthesia techniques in breast cancer surgery. Both techniques aim to improve postoperative pain management and reduce opioid consumption. However, comparative data on their efficacy in oncologic breast surgery remain limited. Objectives: This study aims to compare the efficacy of ESP and PVB in postoperative pain control, patient satisfaction, and opioid consumption in breast cancer surgery. Methods: A randomized, double-blind trial was conducted involving 100 patients undergoing oncologic breast surgery. Patients were randomly assigned to two groups: 50 received ESP, and 50 received PVB. Postoperative pain scores (at rest and during movement), opioid consumption, patient satisfaction, and hospital length of stay were recorded and analyzed. Results: Both ESP and PVB provided effective postoperative analgesia. No significant difference in pain scores was observed between the two groups within the first 24 hours. However, ESP showed a notable advantage in managing chronic postoperative pain at the 6-month follow-up. Opioid consumption was lower in both groups compared to patients without a block. No significant differences in complication rates or hospital stay were noted between the groups. Conclusion: ESP and PVB offer comparable efficacy for immediate postoperative pain control in breast cancer surgery. Nevertheless, ESP may have a superior role in managing long-term pain. Further research is needed to explore the mechanisms behind the observed differences in chronic pain outcomes.
Keywords: pain assessment, breast surgery, PVB block, ESP block
Procedia PDF Downloads 32
5690 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user's transaction-level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications for learning curves, and their relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
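A minimal sketch of the standard BKT update whose guess, slip, and learn parameters the study estimates; the parameter values and response sequence are illustrative, and a self-assessed prior could initialise P(L0) when transaction-level data are scant, as the abstract suggests.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One BKT step: Bayesian posterior given the observed response, then learning."""
    if correct:
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior P(L0); a student's self-assessment could initialise this value
for observed_correct in [True, False, True, True, True]:
    p = bkt_update(p, observed_correct)
    print(f"P(mastery) = {p:.3f}")
```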
Procedia PDF Downloads 174
5689 Synchrotron Based Techniques for the Characterization of Chemical Vapour Deposition Overgrowth Diamond Layers on High Pressure, High Temperature Substrates
Authors: T. N. Tran Thi, J. Morse, C. Detlefs, P. K. Cook, C. Yıldırım, A. C. Jakobsen, T. Zhou, J. Hartwig, V. Zurbig, D. Caliste, B. Fernandez, D. Eon, O. Loto, M. L. Hicks, A. Pakpour-Tabrizi, J. Baruchel
Abstract:
The ability to grow boron-doped diamond epilayers of high crystalline quality is a prerequisite for the fabrication of diamond power electronic devices, in particular high voltage diodes and metal-oxide-semiconductor (MOS) transistors. Boron-doped and intrinsic diamond layers are homoepitaxially overgrown by microwave-assisted chemical vapour deposition (MWCVD) on single crystal high pressure, high temperature (HPHT) grown bulk diamond substrates. Various epilayer thicknesses were grown, with dopant concentrations ranging from 10²¹ atoms/cm³ at nanometer thickness in the case of 'delta doping', to 10¹⁶ atoms/cm³ at 50 µm thickness for high electric field drift regions. The crystalline quality of these overgrown layers as regards defects, strain, distortion… is critical for the device performance through its relation to the final electrical properties (Hall mobility, breakdown voltage...). In addition to the optimization of the epilayer growth conditions in the MWCVD reactor, other important questions related to the crystalline quality of the overgrown layer(s) are: 1) what is the dependence on the bulk quality and surface preparation methods of the HPHT diamond substrate? 2) how do defects already present in the substrate crystal propagate into the overgrown layer; 3) what types of new defects are created during overgrowth, what are their growth mechanisms, and how can these defects be avoided? 4) how can we relate in a quantitative manner parameters related to the measured crystalline quality of the boron-doped layer to the electronic properties of final processed devices? We describe synchrotron-based techniques developed to address these questions. These techniques allow the visualization of local defects and crystal distortion, which complements the data obtained by other well-established analysis methods such as AFM, SIMS, Hall conductivity…. We have used Grazing Incidence X-ray Diffraction (GIXRD) at the ID01 beamline of the ESRF to study lattice parameters and damage (strain, tilt and mosaic spread) both in diamond substrate near-surface layers and in thick (10–50 µm) overgrown boron-doped diamond epilayers. Micro- and nano-section topography have been carried out at both the BM05 and ID06 beamlines of the ESRF using rocking curve imaging techniques to study defects which have propagated from the substrate into the overgrown layer(s) and their influence on final electronic device performance. These studies were performed using various commercially sourced HPHT grown diamond substrates, with the MWCVD overgrowth carried out at the Fraunhofer IAF, Germany. The synchrotron results are in good agreement with low-temperature (5 K) cathodoluminescence spectroscopy carried out on the grown samples using an Inspect F50 FESEM fitted with an IHR spectrometer.
Keywords: synchrotron X-ray diffraction, crystalline quality, defects, diamond overgrowth, rocking curve imaging
Procedia PDF Downloads 262
5688 Aerodynamic Analysis by Computational Fluid Dynamics in Building: Case Study
Authors: Javier Navarro Garcia, Narciso Vazquez Carretero
Abstract:
Eurocode 1, part 1-4, wind actions, includes in its article 1.5 the possibility of using numerical calculation methods to obtain information on the loads acting on a building. On the other hand, analysis using computational fluid dynamics (CFD) is already in widespread use in aerospace, aeronautical, and industrial applications. The application of techniques based on CFD analysis to a building in order to study its aerodynamic behavior now opens a whole alternative field of possibilities for civil engineering and architecture: optimization of the results with respect to those obtained by applying the regulations, the possibility of obtaining information on pressures and velocities at any point of the model at any moment, the analysis of turbulence, and the possibility of modeling any geometry or configuration. The present work compares the results obtained on a building, with respect to its aerodynamic behavior, from a mathematical model based on CFD analysis with the results obtained by applying Eurocode 1, part 1-4, wind actions. It is verified that the results obtained by CFD techniques represent an optimization of the wind action acting on the building with respect to the wind action obtained by applying Eurocode 1, part 1-4, wind actions. In order to carry out this verification, a 45 m high square-base truncated-pyramid building was taken. The mathematical CFD model, based on finite volumes, was calculated using the FLUENT commercial computer application with a scale-resolving simulation (SRS) of the large eddy simulation (LES) type as the turbulence model, for an atmospheric boundary layer wind with a turbulent component in the direction of the flow.
Keywords: aerodynamic, CFD, computational fluid dynamics, computational mechanics
Procedia PDF Downloads 138
5687 Implementation of the Quality Management System and Development of Organizational Learning: Case of Three Small and Medium-Sized Enterprises in Morocco
Authors: Abdelghani Boudiaf
Abstract:
The profusion of studies relating to the concept of organizational learning shows the importance that has been given to this concept in the management sciences. A few years ago, companies leaned towards ISO 9001 certification; this requires the implementation of a quality management system (QMS). In order for this objective to be achieved, companies must have a set of skills, which pushes them to develop learning through continuous training. The results of empirical research have shown that implementation of the QMS in the company promotes the development of learning. It should also be noted that several types of learning are developed in this sense. Given that skills development is normative in the context of the quality approach, companies are obliged to qualify and improve the skills of their human resources. Continuous training is the keystone for developing the necessary learning. To carry out continuous training, companies need to be able to identify their real needs by developing training plans based on well-defined engineering. The training process obviously goes through several stages. Initially, training has a general aspect, that is to say, it focuses on topics and actions of a general nature. Subsequently, this is done in a more targeted and more precise way to accompany the evolution of the QMS and also to implement the changes decided each time (change of working method, change of practices, change of objectives, change of mentality, etc.). To address our research problem, we opted for a qualitative research method. It should be noted that the case study method combines several data collection techniques to explain and understand a phenomenon. Three company cases were studied as part of this research work using different data collection techniques related to this method.
Keywords: changing mentalities, continuing training, organizational learning, quality management system, skills development
Procedia PDF Downloads 110
5686 Magnetophotonics 3D MEMS/NEMS System for Quantitative Mitochondrial DNA Defect Profiling
Authors: Dar-Bin Shieh, Gwo-Bin Lee, Chen-Ming Chang, Chen Sheng Yeh, Chih-Chia Huang, Tsung-Ju Li
Abstract:
Mitochondrial defects have a significant impact on many human diseases and aging-associated phenotypes. Pathogenic mitochondrial DNA (mtDNA) mutations are diverse and usually present as heteroplasmic. The mtDNA 4977 bp deletion is one of the common mtDNA defects, and the ratio of mutated to normal copies is significantly associated with clinical symptoms; thus, its quantitative detection has become an important unmet need for advanced disease diagnosis and therapeutic guidelines. This study presents a Micro-Electro-Mechanical-System (MEMS) enabled automatic microfluidic chip that requires only a minimal sample. The system integrates multiple laboratory operation steps into a Lab-on-a-Chip for highly sensitive and prompt measurement. The entire process includes magnetic nanoparticle-based mtDNA extraction on chip, mutation-selective photonic DNA cleavage, and nanoparticle-accelerated photonic quantitative polymerase chain reaction (qPCR). All subsystems were packed inside a miniature three-dimensional microstructured system and operated in an automatic manner. The integration of magnetic beads with microfluidic transportation could promptly extract and enrich the specific mtDNA. The near-infrared responsive magnetic nanoparticles enabled the micro-PCR to be operated by pulse-width-modulation controlled laser pulsing to amplify the desired mtDNA, which was quantified by the fluorescence intensity captured by a complementary metal oxide semiconductor array detector. The proportions of pathogenic mtDNA in total DNA were thus obtained. A microcapillary electrophoresis module was used to analyze the amplicon products. In conclusion, this study demonstrated a new magnetophotonic-based qPCR MEMS system that successfully detects and quantifies specific disease-related DNA mutations, thus providing a promising future for the rapid diagnosis of mitochondrial diseases.
Keywords: mitochondrial DNA, micro-electro-mechanical-system, magnetophotonics, PCR
Procedia PDF Downloads 220
5685 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
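A minimal sketch of the Diffusion Map step described above (Gaussian affinities, normalisation into a Markov transition matrix, and the leading non-trivial eigenvectors as low-dimensional coordinates); the bandwidth, diffusion time, and number of components are illustrative choices, and the DPM half of the hybrid method is not included.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    """Map X (n_samples x n_features) to diffusion coordinates."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / epsilon)   # Gaussian affinity matrix
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))                     # symmetric conjugate of P = D^-1 K
    vals, vecs = eigh(A)                                # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]              # sort descending
    psi = vecs / np.sqrt(d)[:, None]                    # right eigenvectors of P
    # drop the trivial constant eigenvector (eigenvalue 1), scale by eigenvalue^t
    return psi[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)
```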
Procedia PDF Downloads 111
5684 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes
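A minimal sketch of the training-and-evaluation loop described above; the synthetic features and target are placeholders for the study's project/WBS dataset, and scikit-learn's GradientBoostingRegressor stands in for the tuned XGBoost model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
n = 500
# placeholder features: productivity rate, estimated cost, duration, change magnitude, change timing
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 3] + 0.5 * X[:, 1] - 0.3 * X[:, 0] + rng.normal(scale=0.2, size=n)  # cost impact

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2").mean())

model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("test MSE:", mean_squared_error(y_te, pred), "test R^2:", r2_score(y_te, pred))
print("feature importances:", model.feature_importances_)
```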
Procedia PDF Downloads 445683 The Colorectal Cancer in Patients of Eastern Algeria
Authors: S. Tebibel, C. Mechati, S. Messaoudi
Abstract:
Algeria is currently experiencing the same rate of cancer progression recorded in recent years in Western countries. Colorectal cancer, an increasingly serious public health problem, is the most common cancer after breast and cervical cancer in women and prostate cancer in men. Our work is a retrospective study of colorectal cancer cases across eastern Algeria. Our goal is to carry out an epidemiological, histological, and immunohistochemical study and to examine the different techniques used to diagnose colorectal cancer, together with their value and specificity in detecting the disease. The study includes 110 patients (aged 20 to 87 years) with colorectal cancer, selected according to established inclusion and exclusion criteria. In our series, colorectal cancer shows a male predominance, with a sex ratio of 1.99, and the most affected age group is 50 to 59 years. Colon cancer is more frequent than rectal cancer, with respective frequencies of 60.91% and 39.09%. Among colon cancers, Lieberkühn-type adenocarcinoma (ADK) is the most common histological type, accounting for 85.07% of cases, whereas mucinous (colloid) ADK accounts for only 1.49%. Well-differentiated ADKs predominate in our series, representing 83.58% of cases, while moderately and poorly differentiated adenocarcinomas account for 2.99% and 0.05%, respectively. Among the histological varieties of rectal ADK, Lieberkühn-type ADK is again the most common form in our cohort (76.74%), while colloid mucinous carcinoma accounts for 13.95%. Screening for mutations in the K-ras gene, a key step in the targeted therapy of colorectal cancers, is underway in our study. Colorectal cancer is the focus of much promising research: the evaluation of new therapies (antiangiogenic monoclonal antibodies), the search for predictors of chemotherapy sensitivity, and new prognostic markers identified using molecular biology and proteomics techniques.Keywords: adenocarcinoma, age, colorectal cancer, epidemiology, histological section, sex
Procedia PDF Downloads 3455682 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter
Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geoffrey S. Ibbott
Abstract:
Purpose: The purpose of this analysis was to investigate the dose distributions of different head and neck radiotherapy techniques (3D-CRT, IMRT, and VMAT) using the 3-dimensional PRESAGETM dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, fitted with either the RPC standard insert or the PRESAGETM insert, were acquired separately on a Philips CT scanner, and both scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom on a Varian TrueBeam linear accelerator: first with the phantom containing the RPC standard insert (holding TLD and film dosimeters) and then with the phantom containing the PRESAGETM 3-D dosimeter insert. After irradiation, the standard insert, including point-dose (TLD) and planar Gafchromic® EBT film measurements, was read using the RPC standard procedure. The 3D dose distribution from PRESAGETM was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose-volume histograms (DVHs) and the mean and maximal doses to organs at risk were calculated and compared across the head and neck techniques. The prescription dose was the same for all techniques: 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurement, the PRESAGETM measurement, and the calculated dose distribution. Quality assurance of all plans was performed using the ArcCHECK method. Results: VMAT delivered lower mean and maximum doses to the organs at risk (spinal cord, parotid) than IMRT and 3D-CRT. This dose distribution was verified by absolute dose measurements with the thermoluminescent dosimeter (TLD) system. The central axial, sagittal, and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm); pass rates for the VMAT plan were 99.82% (axial), 99.78% (sagittal), and 98.38% (coronal), and the agreement between PRESAGETM and Pinnacle was better than for the IMRT and 3D-CRT plans, excluding a 7 mm rim at the edge of the dosimeter. Profiles showed good agreement among film, PRESAGETM, and Pinnacle for all plans; 3D gamma analysis was performed for the PTV and OARs, with VMAT and 3D-CRT showing better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage in head and neck radiotherapy. The TLD, EBT film, and PRESAGETM results suggest that VMAT is preferable to IMRT and 3D-CRT for the treatment of head and neck cancer.Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGETM
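The 2D gamma analysis referred to above (±5% dose difference / 3 mm distance-to-agreement) can be illustrated with a brute-force sketch. This is not the RPC/DMOS software; the dose grids, pixel spacing, global normalization, and absence of a low-dose threshold are assumptions made for the example.
```python
# Illustrative 2D gamma index: a measured point passes if some nearby
# calculated point satisfies the combined dose-difference / distance-to-
# agreement criterion (gamma <= 1). All data below are synthetic.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.05, dta_mm=3.0):
    """Brute-force global 2D gamma; both dose arrays share the same grid."""
    ny, nx = dose_ref.shape
    yy, xx = np.meshgrid(np.arange(ny) * spacing_mm,
                         np.arange(nx) * spacing_mm, indexing="ij")
    dmax = dose_ref.max()                      # global normalization
    gamma = np.full(dose_ref.shape, np.inf)

    # Restrict the search to a neighbourhood of a few DTA radii per point.
    search = int(np.ceil(3 * dta_mm / spacing_mm))
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(0, i - search), min(ny, i + search + 1)
            j0, j1 = max(0, j - search), min(nx, j + search + 1)
            dist2 = ((yy[i0:i1, j0:j1] - yy[i, j]) ** 2 +
                     (xx[i0:i1, j0:j1] - xx[i, j]) ** 2)
            ddiff2 = (dose_eval[i0:i1, j0:j1] - dose_ref[i, j]) ** 2
            gamma[i, j] = np.sqrt(dist2 / dta_mm ** 2 +
                                  ddiff2 / (dd * dmax) ** 2).min()
    return 100.0 * np.mean(gamma <= 1.0)

# Toy example on a synthetic Gaussian dose plane with 1 mm pixels.
ref = np.fromfunction(
    lambda i, j: np.exp(-((i - 50) ** 2 + (j - 50) ** 2) / 800.0), (101, 101))
eval_ = ref * 1.02                             # 2% uniform overdose
print(f"pass rate: {gamma_pass_rate(ref, eval_, spacing_mm=1.0):.1f}%")
```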
Procedia PDF Downloads 3975681 The Effect of Corporate Governance on Financial Stability and Solvency Margin for Insurance Companies in Jordan
Authors: Ghadeer A.Al-Jabaree, Husam Aldeen Al-Khadash, M. Nassar
Abstract:
This study aimed to investigate the effect of a well-designed corporate governance system on the financial stability of insurance companies listed on the Amman Stock Exchange (ASE). It also provides a comprehensive model for evaluating and analyzing insurance companies' financial position and prospects, and for comparing the degree to which corporate governance provisions are applied among Jordanian insurance companies. To achieve the goals of the study, the entire population of 27 listed insurance companies was examined through the variables of board of directors, audit committee, internal and external auditors, board and management ownership, and blockholder identities. Statistical analyses were performed in SPSS: descriptive statistics such as means and standard deviations were used to describe the variables, while the F-test and analysis of variance (ANOVA) were used to test the study's hypotheses. The study revealed a significant effect of the corporate governance variables (except for local companies not listed on the ASE) on financial stability, after controlling for variables such as the debt ratio (leverage); it also showed that concentration in motor third-party insurance had no significant effect on the companies' financial stability during the study period. Moreover, the study concludes that the global financial crisis affected the investment side of insurance companies, with an insignificant effect on the technical side. Finally, recommendations are presented, such as strengthening the laws and regulations that support the proper application of corporate governance, promoting transparency in financial statement disclosures, and supporting the companies' technical provisions rather than focusing only on profitability.Keywords: corporate governance, financial stability and solvency margin, insurance companies, Jordan
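As a rough illustration of the statistical approach described above (descriptive statistics followed by an F-test/ANOVA across groups), the sketch below uses Python's scipy instead of SPSS; the grouping variable and the solvency-margin figures are hypothetical, not the study's data.
```python
# One-way ANOVA sketch: does mean solvency margin differ across governance-
# quality groups? All numbers are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical solvency-margin ratios for 27 insurers split into three
# governance-quality groups (low / medium / high board independence).
low = rng.normal(1.4, 0.3, 9)
med = rng.normal(1.6, 0.3, 9)
high = rng.normal(1.9, 0.3, 9)

# Descriptive statistics per group, as in the study's first step.
for name, g in [("low", low), ("med", med), ("high", high)]:
    print(f"{name:>4}: mean={g.mean():.2f}  sd={g.std(ddof=1):.2f}")

# F-test via one-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(low, med, high)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
```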
Procedia PDF Downloads 490