Search results for: methods of measurements

12817 Development of a Direct Immunoassay for Human Ferritin Using Diffraction-Based Sensing Method

Authors: Joel Ballesteros, Harriet Jane Caleja, Florian Del Mundo, Cherrie Pascual

Abstract:

Diffraction-based sensing was utilized in the quantification of human ferritin in blood serum to provide an alternative to the label-based immunoassays currently used in clinical diagnostics and research. The diffraction intensity was measured by the diffractive optics technology or dotLab™ system. Two methods were evaluated in this study: direct immunoassay and direct sandwich immunoassay. In the direct immunoassay, human ferritin was captured by human ferritin antibodies immobilized on an avidin-coated sensor, while the direct sandwich immunoassay had an additional step for the binding of a detector human ferritin antibody on the analyte complex. Both methods were repeatable, with coefficient of variation values below 15%. The direct sandwich immunoassay had a linear response from 10 to 500 ng/mL, which is wider than the 100-500 ng/mL range of the direct immunoassay. The direct sandwich immunoassay also has a higher calibration sensitivity, with a value of 0.002 Diffractive Intensity (ng mL-1)-1 compared to the 0.004 Diffractive Intensity (ng mL-1)-1 of the direct immunoassay. The limit of detection and limit of quantification values of the direct immunoassay were found to be 29 ng/mL and 98 ng/mL, respectively, while the direct sandwich immunoassay has a limit of detection (LOD) of 2.5 ng/mL and a limit of quantification (LOQ) of 8.2 ng/mL. In terms of accuracy, the direct immunoassay had a percent recovery of 88.8-93.0% in PBS, while the direct sandwich immunoassay had 94.1-97.2%. Based on the results, the direct sandwich immunoassay is the better diffraction-based immunoassay in terms of accuracy, LOD, LOQ, linear range, and sensitivity. The direct sandwich immunoassay was utilized in the determination of human ferritin in blood serum, and the results were validated by Chemiluminescent Magnetic Immunoassay (CMIA). The calculated Pearson correlation coefficient was 0.995 and the p-values of the paired-sample t-test were less than 0.5, which show that the results of the direct sandwich immunoassay were comparable to those of CMIA and that it could be utilized as an alternative analytical method.
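
As a quick illustration of how LOD/LOQ figures like those above follow from a calibration curve, the sketch below applies the common 3.3σ/S and 10σ/S estimates to an invented calibration set (the numbers are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical calibration data: diffractive intensity vs. ferritin (ng/mL).
conc = np.array([10, 50, 100, 250, 500], dtype=float)
signal = np.array([0.05, 0.13, 0.24, 0.55, 1.05])

# Least-squares line: signal = S * conc + b, where S is the calibration sensitivity.
S, b = np.polyfit(conc, signal, 1)

# Residual standard deviation of the fit (sigma), with n - 2 degrees of freedom.
residuals = signal - (S * conc + b)
sigma = residuals.std(ddof=2)

# Common IUPAC-style estimates.
lod = 3.3 * sigma / S
loq = 10 * sigma / S
print(f"S = {S:.4f} intensity/(ng/mL), LOD = {lod:.1f} ng/mL, LOQ = {loq:.1f} ng/mL")
```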

Keywords: biosensor, diffraction, ferritin, immunoassay

Procedia PDF Downloads 337
12816 A Wearable Device to Overcome Post-Stroke Learned Non-Use: The Rehabilitation Gaming System for Wearables - Methodology, Design and Usability

Authors: Javier De La Torre Costa, Belen Rubio Ballester, Martina Maier, Paul F. M. J. Verschure

Abstract:

After a stroke, a great number of patients experience persistent motor impairments such as hemiparesis or weakness in one entire side of the body. As a result, the lack of use of the paretic limb might be one of the main contributors to functional loss after clinical discharge. We aim to reverse this cycle by promoting the use of the paretic limb during activities of daily living (ADLs). To do so, we describe the key components of a system that is composed of a wearable bracelet (i.e., a smartwatch) and a mobile phone, designed to bring a set of neurorehabilitation principles that promote acquisition, retention and generalization of skills to the home of the patient. A fundamental question is whether the loss in motor function derived from learned non-use may emerge as a consequence of decision-making processes for motor optimization. Our system is based on well-established rehabilitation strategies that aim to reverse this behaviour by increasing the reward associated with action execution as well as implicitly reducing the expected cost associated with the use of the paretic limb, following the notion of reinforcement-induced movement therapy (RIMT). Here we validate an accelerometer-based measure of arm use and its capacity to discriminate different activities that require increasing movement of the arm. We also show how the system can act as a personalized assistant by providing specific goals and adjusting them depending on the performance of the patients. The usability and acceptance of the device as a rehabilitation tool is tested using a battery of self-reported and objective measurements obtained from acute/subacute patients and healthy controls. We believe that an extension of these technologies will allow for the deployment of unsupervised rehabilitation paradigms during and beyond the hospitalization time.
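
A minimal sketch of what an accelerometer-based arm-use measure can look like (our own simplification for illustration, not the authors' validated metric; the threshold and epoch length are assumptions):

```python
import numpy as np

def arm_use_ratio(acc_paretic, acc_nonparetic, fs=50, thresh=0.05):
    """Crude arm-use laterality from two wrist accelerometer streams.

    acc_*: arrays of shape (n_samples, 3) in g. Returns the fraction of
    1-second epochs in which each arm moved, and the paretic/non-paretic ratio.
    """
    def active_fraction(acc):
        mag = np.linalg.norm(acc, axis=1)   # total acceleration magnitude
        dyn = np.abs(mag - 1.0)             # remove the static gravity component (~1 g)
        n_epochs = len(dyn) // fs
        epochs = dyn[:n_epochs * fs].reshape(n_epochs, fs)
        return float(np.mean(epochs.mean(axis=1) > thresh))

    p = active_fraction(acc_paretic)
    n = active_fraction(acc_nonparetic)
    return p, n, (p / n if n > 0 else float("nan"))
```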

Keywords: stroke, wearables, learned non-use, hemiparesis, ADLs

Procedia PDF Downloads 196
12815 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest fidelity source of both canopy and below-canopy structural features, but usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of LIDAR active remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing whose viability for the accurate construction of vegetation 3D point clouds is promising in current research. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called Deep Learning (DL) that show promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (the truth values are denoted by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated through both the measurement of error relative to the original TLS point clouds and the error in the extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input in ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
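
Reconstruction error against a TLS reference can be quantified with a point-set metric such as the Chamfer distance; a brute-force sketch (illustrative only, not the authors' validation code):

```python
import numpy as np

def chamfer_distance(pred, truth):
    """Symmetric Chamfer distance between two point clouds of shape (N,3) and (M,3).

    Brute force O(N*M); fine for single-tree clouds, illustrative only.
    """
    d2 = ((pred[:, None, :] - truth[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Usage: compare a reconstructed cloud against the TLS reference (random stand-ins here).
pred = np.random.rand(1000, 3)   # stand-in for a DL-reconstructed tree
truth = np.random.rand(1200, 3)  # stand-in for the TLS "truth" cloud
print(chamfer_distance(pred, truth))
```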

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 84
12814 Synthesis of a Highly Sensitive Molecularly Imprinted Sensor for the Selective Determination of Doxycycline in Honey Samples

Authors: Nadia El Alami El Hassani, Soukaina Motia, Benachir Bouchikhi, Nezha El Bari

Abstract:

Doxycycline (DXy) is a tetracycline antibiotic, most frequently prescribed to treat bacterial infections in veterinary medicine. However, its broad antimicrobial activity and low cost lead to intensive use, which can seriously affect human health. Therefore, its spread in food products has to be monitored. The scope of this work was to synthesize a sensitive and very selective molecularly imprinted polymer (MIP) for DXy detection in honey samples. Firstly, the synthesis of this biosensor was performed by casting a layer of carboxylated polyvinyl chloride (PVC-COOH) on the working surface of a gold screen-printed electrode (Au-SPE) in order to bind the analyte covalently under mild conditions. Secondly, DXy as a template molecule was bound to the activated carboxylic groups, and the MIP was formed from a biocompatible polymer by means of a polyacrylamide matrix. Then, DXy was detected by differential pulse voltammetry (DPV) measurements. A non-imprinted polymer (NIP), prepared under the same conditions but without the template molecule, was also tested. We noticed that the elaborated biosensor exhibits high sensitivity and a linear behavior between the generated current and the logarithm of the DXy concentration from 0.1 pg.mL−1 to 1000 pg.mL−1. This technique was successfully applied to determine DXy residues in honey samples, with a limit of detection (LOD) of 0.1 pg.mL−1 and an excellent selectivity when compared to the results for oxytetracycline (OXy) as an analogous interfering compound. The proposed method is cheap, sensitive, selective and simple, and was applied successfully to detect DXy in honey with recoveries of 87% and 95%. Considering these advantages, this system provides a further perspective for food quality control in industrial fields.
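
To make the log-linear calibration concrete, here is a hypothetical sketch of fitting the measured current against log10 concentration and back-calculating the recovery of a spiked sample (all numbers invented, not the paper's data):

```python
import numpy as np

# Hypothetical DPV calibration: current response vs. log10 of DXy concentration (pg/mL).
conc = np.array([0.1, 1, 10, 100, 1000])
current = np.array([2.1, 4.0, 5.8, 7.9, 10.1])   # arbitrary units

slope, intercept = np.polyfit(np.log10(conc), current, 1)

def dxy_from_current(i):
    """Invert the log-linear calibration to estimate concentration (pg/mL)."""
    return 10 ** ((i - intercept) / slope)

# Recovery check on a honey sample spiked at 10 pg/mL (hypothetical reading).
measured = dxy_from_current(5.6)
print(f"recovery = {100 * measured / 10:.0f}%")
```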

Keywords: doxycycline, electrochemical sensor, food control, gold nanoparticles, honey, molecular imprinted polymer

Procedia PDF Downloads 299
12813 Periareolar Zigzag Incision in the Conservative Surgical Treatment of Breast Cancer

Authors: Beom-Seok Ko, Yoo-Seok Kim, Woo-Sung Lim, Ku-Sang Kim, Hyun-Ah Kim, Jin-Sun Lee, An-Bok Lee, Jin-Gu Bong, Tae-Hyun Kim, Sei-Hyun Ahn

Abstract:

Background: Breast conserving surgery (BCS) followed by radiation therapy is today the standard therapy for early breast cancer. It is a safe therapeutic procedure in early breast cancers because it provides the same level of overall survival as mastectomy. A number of different types of incisions are used in BCS. Avoiding scars on the breast is what women desire, and numerous minimal approaches have evolved from this concern. A periareolar incision is often used when a small tumor lies relatively close to the nipple, but its disadvantages include limited exposure of the surgical field. In plastic surgery, various methods such as zigzag incisions have been recommended to achieve satisfactory esthetic results. The periareolar zigzag incision offers not only a good surgical field but also better surgical scars. The purpose of this study was to evaluate the oncological safety of the procedure by studying the status of the surgical margins of the excised tumor specimens, thereby reducing the need for further surgery. Methods: Between January 2016 and September 2016, 148 women with breast cancer underwent BCS or mastectomy by the same surgeon at Asan Medical Center. Patients were excluded from this study if they had bilateral breast cancer, underwent resection of other tumors, had axillary dissection, or were operated on with other incision methods. The periareolar zigzag incision was performed, and the excision margins of the specimens were examined on frozen sections and on paraffin-embedded (permanent) sections in all patients in this study. We retrospectively analyzed the tumor characteristics, operative time, specimen size, and distance from the tumor to the nipple. Results: A total of 148 patients were reviewed; 72 were included in the final analysis and 76 excluded. The mean age of the patients was 52.6 years (range 25-19 years), median tumor size was 1.6 cm (range, 0.2-8.8), median tumor distance from the nipple was 4.0 cm (range, 1.0-9.0), median excised specimen size was 5.1 cm (range, 2.8-15.0), and median operation time was 70.0 minutes (range, 39-138). All patients were discharged with no sign of infection or skin necrosis. Tumor-free resection margins were confirmed by frozen and permanent biopsy in all samples. No patient underwent reoperation. Conclusions: We suggest that the periareolar zigzag incision can provide a good surgical field for removing relatively large tumors and may provide cosmetically good outcomes.

Keywords: periareolar zigzag incision, breast conserving surgery, breast cancer, resection margin

Procedia PDF Downloads 214
12812 Importance of Risk Assessment in Managers' Decision-Making Process

Authors: Mária Hudáková, Vladimír Míka, Katarína Hollá

Abstract:

Making decisions is the core of management and the result of conscious activity that takes place in a particular environment and under concrete conditions. Managers decide about goals and procedures and about the methods of responding to changes and to the problems that have developed. Their decisions affect the effectiveness, quality, economy and overall success of every organisation. In spite of this fact, managers do not pay sufficient attention to the individual steps of the decision-making process. They emphasise coping with the individual methods and techniques of making decisions and neglect analysing the problem or assessing the individual solution variants. In many cases, underestimating the analytical phase can lead to an incorrect assessment of the problem, which can then negatively influence its further solution. Based on our analysis of the theoretical solutions by individual authors dealing with this area, and on research carried out in Slovakia and abroad, we can recognise an insufficient interest of managers in assessing the risks in the decision-making process. The goal of this paper is to assess the risks in the managers' decision-making process relating to the conditions of the environment, to the subject's activity (the manager's personality), to the insufficient assessment of individual variants for solving the problems, and also to situations in which the arisen problem is not solved. The benefit of this paper is the effort to increase the managers' need to deal with risks during the decision-making process. It is important for every manager to assess the risks in his/her decision-making process and to strive to take decisions that reflect the basic conditions, states and development of the environment in the best way, and especially for the managers' decisions to contribute to achieving the determined goals of the organisation as effectively as possible.

Keywords: risk, decision-making, manager, process, analysis, source of risk

Procedia PDF Downloads 253
12811 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but at the same time, vulnerabilities and security threats have increased significantly. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews were conducted with 30 information security experts to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. The AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and the continuous training of human resources.
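
A minimal sketch of a machine-learning intrusion-detection step with the accuracy/sensitivity/specificity indicators mentioned above, on synthetic stand-in data (not the study's dataset or exact pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for labeled network-traffic features (0 = benign, 1 = attack).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # attack detection rate
specificity = tn / (tn + fp)   # benign traffic correctly passed
print(accuracy, sensitivity, specificity)
```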

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 13
12810 African Swine Fever Situation and Diagnostic Methods in Lithuania

Authors: Simona Pileviciene

Abstract:

On 24th January 2014, Lithuania notified two primary cases of African swine fever (ASF) in wild boars. The animals tested positive for the ASF virus (ASFV) genome by real-time PCR at the National Reference Laboratory for ASF in Lithuania (NRL), and the results were confirmed by the European Union Reference Laboratory for African swine fever (CISA-INIA). An intensive wild and domestic animal monitoring program was started. During the period 2014-2017, ASF was confirmed in two large commercial pig holdings with the highest biosecurity; the pigs were killed and destroyed. Since 2014, the ASF-affected territory has expanded from the east and south towards the middle of Lithuania. PCR is one of the diagnostic methods highly recommended by the World Organization for Animal Health (OIE) for the diagnosis of ASF. The aim of the present study was to compare singleplex real-time PCR assays to a duplex assay allowing the identification of ASF and an internal control in a single PCR tube, and to compare the effectiveness of primers that target the p72 gene (ASF 250 bp and ASF 75 bp). Multiplex real-time PCR assays prove to be less time-consuming and more cost-efficient and therefore have a high potential to be applied in routine analysis. It is important to have an effective and fast method that allows virus detection at the beginning of the disease in the wild boar population and in outbreaks in domestic pigs. For the experiments, we used reference samples (INIA, Spain) and positive samples from infected animals in Lithuania. The results show 100% sensitivity and specificity.
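
Sensitivity and specificity here are the usual confusion-matrix ratios; a tiny sketch on a hypothetical validation panel (not the study's samples):

```python
def diagnostic_performance(results):
    """results: list of (test_positive, truly_infected) booleans per sample."""
    tp = sum(t and d for t, d in results)
    tn = sum(not t and not d for t, d in results)
    fp = sum(t and not d for t, d in results)
    fn = sum(not t and d for t, d in results)
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity

# A duplex assay matching the reference panel perfectly gives 100%/100%:
panel = [(True, True)] * 20 + [(False, False)] * 20
print(diagnostic_performance(panel))  # (1.0, 1.0)
```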

Keywords: African swine fever, real-time PCR, wild boar, domestic pig

Procedia PDF Downloads 153
12809 Prevalence and Genetic Determinants of Drug-Resistant Tuberculosis among Patients Completing the Intensive Phase of Treatment in a Tertiary Referral Center in Nigeria

Authors: Aminu Bashir Mohammad, Agwu Ezera, Abdulrazaq G. Habib, Garba Iliyasu

Abstract:

Background: Drug-resistant tuberculosis (DR-TB) continues to be a challenge in developing countries with poor resources. Routine screening for primary DR-TB before commencing treatment is not done in public hospitals in Nigeria, even with the large body of evidence that shows a high prevalence of primary DR-TB. Data on drug resistance and its genetic determinants among follow-up TB patients are lacking in Nigeria. Hence, the aim of this study was to determine the prevalence and genetic determinants of drug resistance among follow-up TB patients in a tertiary hospital in Nigeria. Methods: This was a cross-sectional laboratory-based study conducted on 384 sputum samples collected from consenting follow-up tuberculosis patients. Standard microbiology methods (Ziehl-Neelsen staining and microscopy) and PCR (Line Probe Assay) were used to analyze the samples collected. Pearson's chi-square test was used to analyze the data generated. Results: Out of the three hundred and eighty-four (384) sputum samples analyzed for Mycobacterium tuberculosis (MTB) and DR-TB, 25 (6.5%) were found to be AFB positive. These samples were subjected to PCR (Line Probe Assay), out of which 18 (72%) tested positive for DR-TB. Mutations conferring resistance to rifampicin (rpoB) and isoniazid (katG and/or inhA) were detected in 12/18 (66.7%) and 6/18 (33.3%), respectively. The transmission dynamics of DR-TB were not significantly (p>0.05) dependent on demographic characteristics. Conclusion: There is a need to strengthen the laboratory capacity for the diagnosis of TB and drug resistance testing, and to make these services available, affordable, and accessible to the patients who need them.
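
The prevalence figures and the chi-square test of independence are mechanical to reproduce; in the sketch below, only the 25/384 and 18/25 counts come from the abstract, and the 2x2 table is hypothetical:

```python
from scipy.stats import chi2_contingency

# Reported counts: 25/384 AFB-positive, of which 18 carried resistance mutations.
print(f"AFB-positive: {25/384:.1%}, DR-TB among positives: {18/25:.0%}")

# Hypothetical 2x2 table: DR-TB status by sex, to test dependence on demographics.
table = [[10, 8],   # male: resistant, susceptible
         [8, 9]]    # female: resistant, susceptible
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p > 0.05 -> no significant association
```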

Keywords: drug resistance tuberculosis, genetic determinant, intensive phase, Nigeria

Procedia PDF Downloads 270
12808 Using Seismic Base Isolation Systems in High-Rise Hospital Buildings and a Hybrid Proposal

Authors: Elif Bakkaloglu, Necdet Torunbalci

Abstract:

Earthquakes in Turkiye are an inevitable natural hazard, so buildings must be prepared for them. Earthquake resistance is especially essential in hospital buildings because hospitals are among the first places people come to after an earthquake. Although hospital buildings are better suited to horizontal architecture, it is often necessary to construct and expand multi-storey hospital buildings because of excessive urbanization, the difficulty of obtaining land of suitable size, the decrease in suitable sites, and the increase in land values. In Turkiye, the use of seismic isolators in public hospitals that are located in a first-degree earthquake zone and have more than 100 beds is made obligatory by general instruction. As a result of this decision, it may sometimes be necessary to construct seismically isolated multi-storey hospital buildings in cities where those problems are experienced. Although seismic isolators are widely used in Japan, there are few multi-storey buildings in Turkiye in which they are used. As is known, base isolation systems are the most effective method of earthquake resistance; however, as the number of floors increases, the center of gravity moves away from the base in multi-storey buildings, increasing the overturning effect and limiting the use of these systems. In this context, the aim is to investigate the structural systems of multi-storey buildings built around the world using seismic isolation methods. In addition, a working principle is suggested for disseminating seismic isolators in multi-storey hospital buildings. The results to be obtained from the study will guide architects who design multi-storey hospital buildings in their architectural designs, and engineers in terms of structural system design.

Keywords: earthquake, energy absorbing systems, hospital, seismic isolation systems

Procedia PDF Downloads 126
12807 Homogenization of a Non-Linear Problem with a Thermal Barrier

Authors: Hassan Samadi, Mustapha El Jarroudi

Abstract:

In this work, we consider the homogenization of a non-linear problem in a periodic medium with two periodic connected media exchanging a heat flux through their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity at a rate λ=λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to a critical value depending on λ and ε. Our method is based on Γ-convergence techniques.
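
As a sketch of the setting (the notation is ours and offers only a plausible formulation of the standard imperfect-interface problem, not necessarily the authors' exact one), the interfacial exchange can be written as a flux-jump condition, with the homogenized limit selected by how λ(ε) scales against ε:

```latex
% Transmission condition on the common interface \Gamma_\varepsilon
% (a plausible imperfect-contact formulation; notation ours):
\sigma_1 \,\partial_n u_1^{\varepsilon}
  \;=\; \sigma_2 \,\partial_n u_2^{\varepsilon}
  \;=\; \lambda(\varepsilon)\,\bigl(u_2^{\varepsilon}-u_1^{\varepsilon}\bigr)
  \qquad \text{on } \Gamma_\varepsilon .
% The three homogenized problems then correspond to the three possible limits
\lim_{\varepsilon\to 0}\; \frac{\lambda(\varepsilon)}{\varepsilon}
  \;=\; 0, \qquad \gamma\in(0,\infty), \qquad \text{or}\qquad \infty .
```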

Keywords: variational methods, epiconvergence, homogenization, convergence technique

Procedia PDF Downloads 508
12806 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning

Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan

Abstract:

The conventional dehumidification method in air-conditioning systems mostly utilizes a cooling coil to remove the moisture in the air by cooling the supply air down below its dew-point temperature. The supply air then needs to be reheated to meet the set indoor condition, which consumes a considerable amount of energy and degrades the coefficient of performance of the system. If the dehumidification and cooling processes are separated and operated independently, the indoor conditions can be controlled more efficiently. Therefore, decoupling the dehumidification and cooling processes in heating, ventilation and air conditioning systems is one of the key next-generation technologies, and membrane dehumidification is such a process. The membrane dehumidification method has the advantages of low cost, low energy consumption, etc. It utilizes the pore size and hydrophilicity of the membrane to transfer water vapor by a mass-transfer effect: the moisture in the supply air is removed by the driving force of the vapor-pressure difference across the membrane. The process saves the latent load otherwise used to condense water, which makes for more efficient energy use because it does not involve a heat-transfer effect. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted with composite and commercial membranes. According to the measured data, we can choose a suitable dehumidification membrane for designing the flow-channel length and components of the planar dehumidifier. A vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, the coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity, which makes it suitable for separating water vapor from air. Hence, the Nafion membrane has promising potential in the dehumidification process.
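
Permeability comparisons of this kind reduce to permeance and ideal selectivity; a sketch with invented single-gas test numbers (the units and values are assumptions, not the paper's measurements):

```python
def permeance(molar_flow, area, dp):
    """Permeance = molar flow / (membrane area x partial-pressure difference).
    Units here: mol/s, m^2, Pa -> mol/(m^2 s Pa)."""
    return molar_flow / (area * dp)

# Hypothetical single-gas test data for a planar membrane sample.
P_h2o = permeance(molar_flow=2.0e-4, area=0.01, dp=2000)    # water vapor
P_air = permeance(molar_flow=1.0e-7, area=0.01, dp=100000)  # air

selectivity = P_h2o / P_air  # ideal H2O/air selectivity
print(f"P_H2O = {P_h2o:.2e}, P_air = {P_air:.2e}, selectivity = {selectivity:.0f}")
```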

Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning

Procedia PDF Downloads 129
12805 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements

Authors: M. A. García, J. Vinolas, A. Hernando

Abstract:

Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to pursue a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent the failure of steel-based facilities are required. Classical mechanical tests, for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are simply not applicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for the non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element upon applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for the contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen, reflected in changes in the induction, allowed us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.

Keywords: magnetoelastic, magnetic induction, mechanical stress, steel

Procedia PDF Downloads 16
12804 Chemically Modified Chitosan Derivatives with Ameliorated Properties Appropriate for Drug Delivery

Authors: Georgia M. Michailidou, Nina-Maria S. Ainali, Eleftheria C. Xanthopoulou, Dimitrios N. Bikiaris

Abstract:

Polysaccharides are polymeric materials derived from nature. They are extensively used in pharmaceutical technology due to their low cost, ready availability and low toxicity. Chitosan is the product derived from the deacetylation of chitin, usually obtained from arthropods. It is a linear polysaccharide composed of repeating N-deacetylated (glucosamine) units together with some N-acetylated residues. Due to its excellent biological properties, it is an attractive natural polymer: it is biocompatible, with low toxicity and complete biodegradability. Although chitosan itself has excellent properties, the chemical modification of its structure results in new derivatives with properties ameliorated over those of the initial polymer. This is exactly the purpose of the present study, in which chitosan was modified with three different monomers, namely trans-aconitic acid, succinic anhydride and 2-hydroxyethyl acrylate. In chitosan's modification with trans-aconitic acid, EDC was utilized as an activator of the carboxylic groups of the monomer, and then a coupling reaction with the amino groups took place. Succinic anhydride reacted with chitosan through a ring-opening reaction, while 2-hydroxyethyl acrylate reacted through the addition of chitosan's amino group to the double bond of the monomer. Through FTIR and NMR measurements, the success of each reaction was confirmed and the new structures of the derivatives were verified. X-ray diffraction was utilized to examine the effect of the modifications on chitosan's crystallinity. Finally, swelling tests were conducted to assess the improved ability of the new polymeric materials to absorb water. Our results support the successful modification of chitosan's macromolecular chains in all three reactions. Furthermore, the new derivatives appear to be amorphous and show a great ability to absorb water.
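
Swelling tests are commonly reported as a gravimetric swelling degree; a minimal sketch, assuming the usual (Ws - Wd)/Wd definition (the masses below are invented):

```python
def swelling_degree(wet_mass, dry_mass):
    """Equilibrium swelling degree (%) = (Ws - Wd) / Wd * 100."""
    return 100.0 * (wet_mass - dry_mass) / dry_mass

# Hypothetical masses (g) for a dry film and the same film after soaking to equilibrium.
print(f"{swelling_degree(0.85, 0.20):.0f}%")  # -> 325%
```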

Keywords: chitosan, derivatives, modification, polysaccharide

Procedia PDF Downloads 95
12803 An Analysis of Clustering-Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, to make diagnoses and prognoses, and to choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which is essential in the data-mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms, such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms, are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
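
As an illustration of the clustering step on expression data, a K-means sketch on a synthetic gene-by-sample matrix (not the paper's data or its proposed hybrid method):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an expression matrix: rows = genes, columns = samples.
rng = np.random.default_rng(1)
expression = np.vstack([rng.normal(loc, 1.0, size=(100, 12)) for loc in (-2, 0, 2)])

# Standardize each sample column, then cluster genes by expression pattern.
X = StandardScaler().fit_transform(expression)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # genes per cluster; co-clustered genes are candidates for selection
```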

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 308
12802 Suicide, Help-Seeking and LGBT Youth: A Mixed Methods Study

Authors: Elizabeth McDermott, Elizabeth Hughes, Victoria Rawlings

Abstract:

Globally, suicide is the second leading cause of death among 15-29 year-olds. Young people who identify as lesbian, gay, bisexual and transgender (LGBT) have elevated rates of suicide and self-harm. Despite the increased risk, there is a paucity of research on LGBT help-seeking and suicidality. This is the first national study to investigate LGBT youth help-seeking for suicidal feelings and self-harm. We report on a UK sequential exploratory mixed-methods study that employed face-to-face and online methods in two stages. Stage one involved 29 online (n=15) and face-to-face (n=14) semi-structured interviews with LGBT youth aged under 25 years old. Stage two utilized an online LGBT youth questionnaire employing a community-based sampling strategy (n=789). We found across the sample that LGBT youth who self-harmed or felt suicidal were reluctant to seek help. The results indicated that participants normalized their emotional distress and only asked for help when they reached a crisis point and were no longer coping. Those who self-harmed (p<0.001, OR=2.82), had attempted or planned suicide (p<0.05, OR=1.48), or had experience of abuse related to their sexuality or gender (p<0.01, OR=1.80) were most likely to seek help. A number of interconnecting reasons contributed to participants' problems accessing help. The most prominent of these were: negotiating norms in relation to sexuality, gender, mental health and age; being unable to talk about emotions; and coping and self-reliance. It is crucial that policies and practices that aim to prevent LGBT youth suicide recognize that norms and normalizing processes connected to sexual orientation and gender identity are additional difficulties that LGBT youth face in accessing mental health support.

Keywords: help-seeking, LGBT, suicide, youth

Procedia PDF Downloads 263
12801 Green Organic Chemistry: A New Paradigm in Pharmaceutical Sciences

Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan

Abstract:

Green organic chemistry, one of the most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are among the important starting materials for a great number of major chemical industries: the production of organic chemicals as raw materials or reagents for other applications is a major sector of manufacturing, covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, catalysts and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. During these processes, there are many health and safety problems for workers, in addition to the environmental problems caused by their use and their deposition as waste. Green chemistry, with its 12 principles, would like to see changes in the conventional ways that were used for decades to make synthetic organic chemicals, including the use of less toxic starting materials. Green chemistry would like to increase the efficiency of synthetic methods, use less toxic solvents, reduce the number of stages in synthetic routes, and minimize waste as far as practically possible. In this way, organic synthesis will be part of the effort for sustainable development. Green chemistry is also interested in research into alternatives and innovations concerning many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be advanced at the small-scale laboratory level and will also be extended to large-scale industrial production processes through new techniques. Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional methods of heating with modern methods such as microwave heating, so that the carbon footprint is reduced as far as possible. A further benefit of green chemistry is that it will reduce environmental pollution through the use of less toxic reagents, the minimization of waste, and more biodegradable byproducts. In the present paper, some of the basic principles, approaches and early achievements of green chemistry, as a branch of chemistry concerned with how chemical reactions are carried out, are considered, with a summary of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements is also included.
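
The E-factor mentioned above has a simple definition, the mass of waste generated per mass of product; a toy calculation with invented masses:

```python
def e_factor(total_input_mass_kg, product_mass_kg):
    """Sheldon E-factor: kg of waste generated per kg of product."""
    return (total_input_mass_kg - product_mass_kg) / product_mass_kg

# Hypothetical batch: 120 kg of reagents/solvents in, 20 kg of product out.
print(e_factor(120, 20))  # -> 5.0, i.e., 5 kg of waste per kg of product
```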

Keywords: energy, E-factor, carbon footprint, microwave, sonochemistry, advancement

Procedia PDF Downloads 271
12800 The Effect of Reaction Time on the Morphology and Phase of Quaternary Ferrite Nanoparticles (FeCoCrO₄) Synthesised from a Single Source Precursor

Authors: Khadijat Olabisi Abdulwahab, Mohammad Azad Malik, Paul O'Brien, Grigore Timco, Floriana Tuna

Abstract:

The synthesis of spinel ferrite nanoparticles with a narrow size distribution is crucial for their numerous applications, including information storage, hyperthermia treatment, drug delivery, contrast agents in magnetic resonance imaging, catalysis, sensors, and environmental remediation. Ferrites have the general formula MFe₂O₄ (M = Fe, Co, Mn, Ni, Zn, etc.) and possess remarkable electrical and magnetic properties, which depend on the cations, the method of preparation, the size, and their site occupancies. To the best of our knowledge, there are no reports on the use of a single source precursor to synthesise quaternary ferrite nanoparticles. Herein, we demonstrate the use of the trimetallic iron pivalate cluster [CrCoFeO(O₂CᵗBu)₆(HO₂CᵗBu)₃] as a single source precursor to synthesise monodisperse cobalt chromium ferrite (FeCoCrO₄) nanoparticles by the hot-injection thermolysis method. The precursor was thermolysed in oleylamine and oleic acid, with diphenyl ether as the solvent, at 260 °C. The effect of reaction time on the stoichiometry, phases and morphology of the nanoparticles was studied. The p-XRD patterns of the nanoparticles obtained after one hour showed a pure phase of cubic iron cobalt chromium ferrite (FeCoCrO₄). TEM showed that more monodisperse, spherical ferrite nanoparticles were obtained after one hour. Magnetic measurements revealed that the ferrite particles are superparamagnetic at room temperature. The nanoparticles were characterised by powder X-ray diffraction (p-XRD), transmission electron microscopy (TEM), energy dispersive spectroscopy (EDS) and a superconducting quantum interference device (SQUID).

Keywords: cobalt chromium ferrite, colloidal, hot injection thermolysis, monodisperse, reaction time, single source precursor, quaternary ferrite nanoparticles

Procedia PDF Downloads 290
12799 Compactness and Quality of Life: Applying Regression Analysis on American Cities

Authors: Hsi-Chuan Wang, Hongxi Yin

Abstract:

Compactness has been proposed as a type of sustainable urban form globally. However, its meanings and measurements may diverge with varying interpretations; moreover, since compactness was proposed to counter auto culture and urban sprawl in developed countries, voices have emerged asking to rethink its suitability for developing countries. Based upon such understanding, Quality of Life (QOL) has been suggested as a good way to show the overall benefit of compactness. Against this background, two subjects were targeted for discussion in this paper: (I) the meaning and feasibility of compactness in developing and developed countries, and (II) the interaction between compactness and QOL. This paper argues that compactness should not be considered a universal principle for cities of all kinds, but rather an ideal concept for urban designers and planners to consider throughout local practices. It first reviewed the benefits of both compactness and sprawl to uncover the features behind these urban forms, and later addressed the meaning and difficulty of adopting compactness in both developing and developed countries. Secondly, arguing that compactness should be positioned as a 'process' along the transition from developing countries to developed ones, this paper applied both cross-sectional and longitudinal analysis to uncover (I) the relationship between compactness and QOL for 30 American cities and (II) the impact of 'becoming compact' on QOL for 8 identified American Urbanized Areas (UZAs). The findings indicated that higher compactness was linked to lower QOL among the compact cities, but to higher QOL among the sprawling cities. In addition, based upon the comparison between 2000 and 2010 for the 8 UZAs, QOL rose during the transition from sprawling areas into compact ones, but the extent of improvement in QOL differed greatly among areas. Given these findings, compact development should be proposed as a general guideline leading contemporary sprawling cities in transition toward sustainable urbanism; however, to prevent externalities from damaging QOL through over-compactness, compact-city policy should be flexible enough to adjust a long-term roadmap for sustainable development.
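
The cross-sectional step reduces to a simple regression of QOL on a compactness index; a sketch with invented city-level values (not the study's data or its exact model):

```python
import numpy as np

# Hypothetical cross-sectional data: a compactness index and a QOL score per city.
compactness = np.array([35, 42, 50, 55, 61, 68, 74, 80, 88, 95], dtype=float)
qol = np.array([70, 72, 71, 69, 68, 66, 67, 64, 62, 60], dtype=float)

slope, intercept = np.polyfit(compactness, qol, 1)
pred = slope * compactness + intercept
r2 = 1 - ((qol - pred) ** 2).sum() / ((qol - qol.mean()) ** 2).sum()
print(f"QOL = {slope:.2f} * compactness + {intercept:.1f}, R^2 = {r2:.2f}")
```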

Keywords: compactness, quality of life, sprawl, sustainable urbanism

Procedia PDF Downloads 147
12798 Variability of Covariance of Selected Skeletal Diameters of Females in a Longitudinal Physical Training Programme

Authors: Dhananjoy Shaw, Seema Sharma (Kaushik)

Abstract:

Anthropometry helps in associating the physical properties of an individual with their racial, cultural, and psychological attributes. Numerous research studies have included different skeletal diameters as variables, but most of them include these variables to describe specific characteristics or traits of the body. There seems to be a scarcity of literature on the effect of any kind of longitudinal physical training on human skeletal diameters. Hence, the present investigation was conducted to study the variability of covariance of selected skeletal diameters of females in a longitudinal physical training programme. The sample for the study was 78 college-going students of the University of Delhi, classified equally into three groups, viz. (a) a progressive load of training or conditioning group, coded as PLT; (b) a constant load of training or non-conditioning group, coded as CLT; and (c) a no-load or control or sedentary group, coded as NL. Collectively, the mean age of the sample was 19.54±1.79 years. The randomly selected samples were given maximum consideration to maintain their homogeneity. The variables included biacromial diameter, biiliocristal diameter, bitrochanterion diameter, humeral bicondylar, femoral bicondylar, wrist diameter, ankle diameter, and foot breadth. A multi-group repeated-measures design was adopted for the experimentation. Each group was measured four times: at the start and after the completion of each of the three meso-cycles of six weeks' duration. The measurements were taken following standard landmarks and procedures. Mean, standard deviation, analysis of covariance and its post-hoc analysis were computed to analyze the data statistically. The study concluded that both the progressive and the constant load of physical training bring changes in the selected skeletal diameters of females, and that the observed increases reflect growth as well as training.
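
The analysis-of-covariance step can be sketched as a post-score model with the baseline value as covariate; the data frame below is invented, and statsmodels is one possible tool (not necessarily the authors'):

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical post-training data for one skeletal diameter (cm), adjusted for baseline.
df = pd.DataFrame({
    "group":    ["PLT"] * 3 + ["CLT"] * 3 + ["NL"] * 3,
    "baseline": [35.1, 36.0, 34.8, 35.5, 34.9, 36.2, 35.0, 35.8, 34.7],
    "post":     [36.2, 37.1, 35.9, 36.0, 35.3, 36.7, 35.1, 35.9, 34.8],
})

model = ols("post ~ baseline + C(group)", data=df).fit()
print(anova_lm(model, typ=2))  # group effect adjusted for the baseline covariate
```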

Keywords: longitudinal, physical training, skeletal diameters, step progression load

Procedia PDF Downloads 113
12797 Current Epizootic Situation of Q Fever in Polish Cattle

Authors: Monika Szymańska-Czerwińska, Agnieszka Jodełko, Krzysztof Niemczuk

Abstract:

Q fever (coxiellosis) is an infectious disease of animals and humans caused by C. burnetii and distributed widely throughout the world. Cattle and small ruminants are commonly known shedders of C. burnetii. The aims of this study were to evaluate the seroprevalence and shedding of C. burnetii in cattle. The genotypes of the pathogen present in the tested specimens were also identified using the MLVA (Multiple Locus Variable-Number Tandem Repeat Analysis) and MST (multispacer sequence typing) methods. Sampling was conducted in different regions of Poland in 2018-2021. In total, 2180 bovine serum samples from 801 cattle herds were tested by ELISA (enzyme-linked immunosorbent assay). 489 specimens from 157 cattle herds, such as individual milk samples (n=407), bulk tank milk (n=58), vaginal swabs (n=20), placenta (n=3) and feces (n=1), were subjected to C. burnetii-specific qPCR. The qPCR (IS1111 transposon-like repetitive region) was performed using the Adiavet COX RealTime PCR kit. Genotypic characterization of the strains was conducted utilizing the MLVA and MST methods; MLVA was performed using 6 variable loci. The overall herd-level seroprevalence of C. burnetii infection was 36.74% (801/2180). Shedders were detected in 29.3% (46/157) of the cattle herds, across all tested regions. Sequence type ST61 was identified in 10 out of 18 genotyped strains; interestingly, one strain represents a sequence type that has never been recorded previously. The MLVA method identified three previously known genotypes: the most common was J, but I and BE were also recognized. Moreover, one genotype had never been described previously. In conclusion, seroprevalence and shedding of C. burnetii in cattle are common, and the strains are genetically diverse.

Keywords: Coxiella burnetii, cattle, MST, MLVA, Q fever

Procedia PDF Downloads 71
12796 Hybrid Bimodal Magnetic Force Microscopy

Authors: Fernández-Brito David, Lopez-Medina Javier Alonso, Murillo-Bracamontes Eduardo Antonio, Palomino-Ovando Martha Alicia, Gervacio-Arciniega José Juan

Abstract:

Magnetic Force Microscopy (MFM) is an Atomic Force Microscopy (AFM) technique that characterizes, at a nanometric scale, the magnetic properties of ferromagnetic materials. Conventional MFM works by scanning in two different AFM modes. The first is tapping mode, in which the cantilever has short-range force interactions with the sample, with the purpose of obtaining the topography. Then the lift AFM mode starts, raising the cantilever to maintain a fixed distance between the tip and the surface of the sample, so that it interacts only with the magnetic forces of the sample, which are long-ranged. In recent years, there have been attempts to improve the MFM technique. Bimodal MFM was first developed theoretically and later proven experimentally. In bimodal MFM, the internal AFM piezoelectric element is used to make the cantilever oscillate in two resonance modes simultaneously; the first mode detects the topography, while the second is more sensitive to the magnetic forces between the tip and the sample. However, it has been shown that the cantilever vibrations induced by the internal AFM piezoelectric ceramic are not optimal, affecting bimodal MFM characterizations. Subsequently, Secondary Resonance Magnetic Force Microscopy (SR-MFM) was developed. In this technique, a coil located below the sample generates an external alternating magnetic field that excites the cantilever at a second frequency to apply the bimodal MFM mode. Nonetheless, for ferromagnetic materials with a low coercive field, the external field used in the SR-MFM technique can modify the magnetic domains of the sample. In this work, a Hybrid Bimodal MFM (HB-MFM) technique is proposed. In HB-MFM, bimodal MFM is used, but the first resonance of the cantilever is excited by the magnetic field of the ferromagnetic sample itself, which is vibrated by a piezoelectric element placed under it. The advantages of this new technique are demonstrated through preliminary results obtained by HB-MFM on a hard disk sample. Additionally, traditional two-pass MFM and HB-MFM measurements were compared.

Keywords: magnetic force microscopy, atomic force microscopy, magnetism, bimodal MFM

Procedia PDF Downloads 55
12795 Relative Effectiveness of Inquiry Role Approach and Expository Instructional Methods in Fostering Students' Retention in Chemistry

Authors: Joy Johnbest Egbo

Abstract:

The study was designed to investigate the relative effectiveness of the inquiry role approach and the expository instructional method in fostering students' retention in chemistry. Two research questions were answered, and three null hypotheses were formulated and tested at the 0.05 level of significance. A quasi-experimental design (the non-equivalent pretest-posttest control group design) was adopted for the study. The population for the study comprised all senior secondary school class two (SS II) students offering Chemistry in single-sex schools in the Enugu Education Zone. The instrument for data collection was a self-developed Chemistry Retention Test (CRT). Relevant data were collected from a sample of one hundred and forty-one (141) students drawn from two secondary schools (1 male and 1 female school) using a simple random sampling technique. A reliability coefficient of 0.82 was obtained for the instrument using the Kuder-Richardson formula 20 (KR-20). Mean and standard deviation scores were used to answer the research questions, while two-way analysis of covariance (ANCOVA) was used to test the hypotheses. The findings showed that the students taught with the inquiry role approach retained the chemistry concepts significantly better than their counterparts taught with the expository method. Female students retained slightly better than their male counterparts. There was a significant interaction between the instructional packages and gender on chemistry students' retention. It was recommended, among others, that teachers be encouraged to employ the inquiry role approach more in the teaching of chemistry and of other subjects in general. By so doing, students' retention of the subject could be increased.
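
KR-20 has a closed form, (k/(k-1))(1 - Σ p_i q_i / σ_X²); a minimal sketch on an invented response matrix:

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items.

    items: array of shape (n_students, n_items).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)

# Tiny hypothetical response matrix: 5 students x 4 items.
resp = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 0, 0]]
print(f"KR-20 = {kr20(resp):.2f}")
```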

Keywords: inquiry role approach, retention, expository method, chemistry

Procedia PDF Downloads 498
12794 Antagonistic Potential of Epiphytic Bacteria Isolated in Kazakhstan against Erwinia amylovora, the Causal Agent of Fire Blight

Authors: Assel E. Molzhigitova, Amankeldi K. Sadanov, Elvira T. Ismailova, Kulyash A. Iskandarova, Olga N. Shemshura, Ainur I. Seitbattalova

Abstract:

Fire blight is a quarantine bacterial disease that is very harmful to commercial apple and pear production. To date, several different methods have been proposed for disease control, including the use of copper-based preparations and antibiotics, which are not always reliable or effective. The use of bacteria as biocontrol agents is one of the most promising and eco-friendly alternative methods. Bacteria with protective activity against the causal agent of fire blight are often present among the epiphytic microorganisms of the phyllosphere of host plants. Therefore, the main objective of our study was to screen local epiphytic bacteria as possible antagonists against Erwinia amylovora, the causal agent of fire blight. Samples of infected organs of apple and pear trees (shoots, leaves, fruits) were collected from industrial horticulture areas in various agro-ecological zones of Kazakhstan. Epiphytic microorganisms were isolated by standard and modified methods on specific nutrient media. The primary screening of the selected microorganisms under laboratory conditions, to determine their ability to suppress the growth of Erwinia amylovora, was performed by an agar diffusion test. Among the 142 bacteria isolated from fire blight host plants, 5 isolates, belonging to the genera Bacillus, Lactobacillus, Pseudomonas, Paenibacillus and Pantoea, showed higher antagonistic activity against the pathogen. The diameters of the inhibition zones depended on the species and ranged from 10 mm to 48 mm. The maximum inhibition zone diameter (48 mm) was exhibited by B. amyloliquefaciens; a smaller inhibitory effect was shown by Pantoea agglomerans PA1 (19 mm). The study of the inhibitory effect of Lactobacillus species against E. amylovora showed that, among the 7 isolates tested, only one (Lactobacillus plantarum 17M) demonstrated an inhibition zone (30 mm). In summary, this study was devoted to detecting beneficial epiphytic bacteria from the plant organs of pear and apple trees for fire blight control in Kazakhstan. The results obtained from the in vitro experiments showed that the most efficient bacterial isolates are Lactobacillus plantarum 17M, Bacillus amyloliquefaciens MB40, and Pantoea agglomerans PA1. These antagonists are suitable for development as biocontrol agents for fire blight control. Their efficacies will be evaluated further in biological tests under in vitro and field conditions in our future studies.

Keywords: antagonists, epiphytic bacteria, Erwinia amylovora, fire blight

Procedia PDF Downloads 150
12793 Symmetry of Performance across Lower Limb Tests between the Dominant and Non-Dominant Legs

Authors: Ghulam Hussain, Herrington Lee, Comfort Paul, Jones Richard

Abstract:

Background: To determine the functional limitations of the lower limbs or readiness to return to sport, most rehabilitation programs use some form of testing; however, it is still unknown what the pass criterion should be. This study aims to investigate the differences between dominant and non-dominant leg performance across several lower limb tasks: hop tests, two-dimensional (2D) frontal plane projection angle (FPPA) tests, and isokinetic muscle tests. This study also provides reference values for the limb symmetry index (LSI) for the hop and isokinetic muscle strength tests. Twenty recreationally active participants were recruited, 11 males and 9 females (age 23.65±2.79 years; height 169.9±3.74 cm; body mass 74.72±5.81 kg). All tests were undertaken on the dominant and non-dominant legs: (1) hop tests, which included the horizontal hop for distance and the crossover hop tests; (2) FPPA, with 2D capture from two different tasks, forward hop landing and squatting; and (3) isokinetic muscle strength tests, in which four different muscle groups were tested: the quadriceps, hamstrings, ankle plantar flexors, and hip extensors. The main outcome measurements were as follows. For the hop tests, the maximum distance in the single/crossover hop for distance was taken using a standard tape measure. For the FPPA, the knee valgus angle was measured at the position of maximum knee flexion using a single 2D camera. For the isokinetic muscle strength tests, three different variables were measured: peak torque, peak torque to body weight, and total work to body weight. All muscle strength tests were applied in both concentric and eccentric muscle actions at a speed of 60°/sec. This study revealed no differences between dominant and non-dominant leg performance; an LSI of 85% was achieved by the majority of the subjects in both the hop and isokinetic muscle tests, and therefore one leg's hop performance can define the other's.
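
The LSI itself is a one-line ratio; a sketch with invented hop distances (the 85% pass band is the threshold cited in the abstract):

```python
def limb_symmetry_index(involved, uninvolved):
    """LSI (%) = involved / uninvolved x 100; >= 85% is the pass band used here."""
    return 100.0 * involved / uninvolved

# Hypothetical single-hop distances (cm): non-dominant vs. dominant leg.
print(f"LSI = {limb_symmetry_index(148, 160):.0f}%")  # -> 93%, above the 85% threshold
```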

Keywords: 2D FPPA, hop tests, isokinetic testing, LSI

Procedia PDF Downloads 48
12792 Compost Bioremediation of Oil Refinery Sludge by Using Different Manures in a Laboratory Condition

Authors: O. Ubani, H. I. Atagana, M. S. Thantsha

Abstract:

This study was conducted to measure the reduction in polycyclic aromatic hydrocarbon (PAH) content in oil sludge by co-composting the sludge with pig, cow, horse, and poultry manures under laboratory conditions. Four kilograms of soil spiked with 800 g of oil sludge was co-composted separately with each manure in a ratio of 2:1 (w/w) spiked soil:manure and with wood chips in a ratio of 2:1 (w/v) spiked soil:wood chips. A control was set up in the same way but without manure. Mixtures were incubated for 10 months at room temperature. Compost piles were turned weekly, and moisture was maintained between 50% and 70%. Moisture level, pH, temperature, CO2 evolution, and oxygen consumption were measured monthly, and the ash content at the end of the experiment. Bacteria capable of utilizing PAHs were isolated, purified, and characterized by molecular techniques: polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and amplification of the 16S rDNA gene using specific primers (16S-P1 PCR and 16S-P2 PCR), after which the amplicons were sequenced. The extent of PAH reduction was measured using an automated Soxhlet extractor with dichloromethane as the extraction solvent, coupled with gas chromatography/mass spectrometry (GC/MS). Temperature did not exceed 27.5°C in any compost heap, pH ranged from 5.5 to 7.8, and CO2 evolution was highest in poultry manure at 18.78 µg/dwt/day. Microbial growth and activity were enhanced. The bacteria identified were Bacillus, Arthrobacter, and Staphylococcus species. PAH measurements showed reductions of between 77% and 99%. The reduction observed in the control experiment may be attributable to its invasion by fungi. Co-composting of spiked soils with animal manures enhanced the reduction in PAHs. Interestingly, all bacteria isolated and identified in this study were present in all treatments, including the control.
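
The percentage reductions quoted above follow the standard definition: the drop in PAH concentration relative to the initial concentration. A minimal sketch in Python, with hypothetical concentrations (the abstract reports only the resulting 77-99% range):

    def pah_reduction_percent(initial: float, final: float) -> float:
        """Percent reduction in PAH concentration over the composting period."""
        return 100.0 * (initial - final) / initial

    # Hypothetical GC/MS readings in mg/kg dry weight (illustrative only).
    print(pah_reduction_percent(initial=250.0, final=12.5))  # 95.0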

Keywords: bioremediation, co-composting, oil refinery sludge, PAHs, bacteria spp, animal manures, molecular techniques

Procedia PDF Downloads 457
12791 Determination of Klebsiella Pneumoniae Susceptibility to Antibiotics Using Infrared Spectroscopy and Machine Learning Algorithms

Authors: Manal Suleiman, George Abu-Aqil, Uraib Sharaha, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman, Mahmoud Huleihel

Abstract:

Klebsiella pneumoniae is one of the most aggressive multidrug-resistant bacteria associated with human infections, resulting in high mortality and morbidity. For effective treatment, it is therefore important to diagnose both the species of the infecting bacteria and their susceptibility to antibiotics. Currently used methods for determining bacterial susceptibility to antibiotics are time-consuming (about 24 h after the first culture), so there is a clear need for rapid alternatives. Infrared spectroscopy is a well-established, sensitive, and simple method capable of detecting minor biomolecular changes in biological samples associated with developing abnormalities. The main goal of this study is to evaluate the potential of infrared spectroscopy, in tandem with the Random Forest and XGBoost machine learning algorithms, to diagnose the susceptibility of Klebsiella pneumoniae to antibiotics within approximately 20 minutes of the first culture. In this study, 1190 Klebsiella pneumoniae isolates were obtained from different patients with urinary tract infections. The isolates were measured with an infrared spectrometer, and the spectra were analyzed by the Random Forest and XGBoost algorithms to determine their susceptibility to nine specific antibiotics. Our results confirm that it was possible to classify the isolates as sensitive or resistant to specific antibiotics with success rates of 80%-85% across the antibiotics tested. These results demonstrate the promising potential of infrared spectroscopy as a powerful diagnostic method for determining Klebsiella pneumoniae susceptibility to antibiotics.
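
A minimal sketch of the classification step described above, assuming the preprocessed spectra are available as a feature matrix (one row per isolate, one column per wavenumber) with binary sensitive/resistant labels for one antibiotic. Scikit-learn's RandomForestClassifier stands in for the Random Forest model; the placeholder arrays, shapes, and parameters are illustrative, not the study's data or settings:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder data: 1190 isolates x 900 wavenumbers, random labels.
    rng = np.random.default_rng(0)
    spectra = rng.random((1190, 900))        # would be measured IR absorbances
    labels = rng.integers(0, 2, size=1190)   # 0 = sensitive, 1 = resistant

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.2, stratify=labels, random_state=0)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)
    print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")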

Keywords: urinary tract infection (UTI), Klebsiella pneumoniae, bacterial susceptibility, infrared spectroscopy, machine learning

Procedia PDF Downloads 147
12790 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract structure from data by grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem that minimizes a given cost function; this minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree to which a data point belongs to a cluster is measured by a membership value in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object's memberships over all clusters must equal one. This constraint can cause several problems, especially when the data lie in a noisy space. Regularization has been applied to the FCM clustering technique: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on a relative entropy regularization approach, in which the optimization problem again aims to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
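
For illustration, a minimal sketch of entropy-regularized fuzzy clustering in Python. It minimizes sum_ij u_ij * ||x_i - c_j||^2 + lam * sum_ij u_ij * log(u_ij), subject to each object's memberships summing to one, which yields a closed-form (softmax) membership update. This is the generic entropy-regularized formulation; the paper's specific relative entropy term is not given in the abstract, so this sketches the general idea rather than the authors' exact model:

    import numpy as np

    def entropy_regularized_fcm(X, n_clusters, lam=1.0, n_iter=100, seed=0):
        """Alternate membership and center updates for the entropy-regularized
        fuzzy clustering objective described in the lead-in."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), n_clusters, replace=False)]
        for _ in range(n_iter):
            # Squared distances between every object and every center.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            # Closed-form membership update: row-wise softmax over -d2 / lam.
            logits = -d2 / lam
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            u = np.exp(logits)
            u /= u.sum(axis=1, keepdims=True)
            # Center update: membership-weighted mean of the data.
            centers = (u.T @ X) / u.sum(axis=0)[:, None]
        return u, centers

    # Toy example: two Gaussian blobs in 2-D.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
    u, centers = entropy_regularized_fcm(X, n_clusters=2, lam=0.5)
    print(centers.round(2))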

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 247
12789 Characterization of Electrical Transport across Ultra-Thin SrTiO₃ and BaTiO₃ Barriers in Tunnel Junctions

Authors: Henry Navarro, Martin Sirena, Nestor Haberkorn

Abstract:

We report on electrical transport, characterized through current-voltage (I-V) curves measured with a conducting atomic force microscope (CAFM) at room temperature, in GdBa₂Cu₃O₇-d/insulator/GdBa₂Cu₃O₇-d and Nb/insulator/GdBa₂Cu₃O₇-d tunnel junctions. The measurements were obtained on tunnel junctions with different areas (900 μm², 400 μm², and 100 μm²). Trilayers with GdBa₂Cu₃O₇-d (GBCO) as the bottom electrode, SrTiO₃ (STO) or BaTiO₃ (BTO) as the insulating barrier (thicknesses between 1.6 nm and 4 nm), and GBCO or Nb as the top electrode were grown by DC sputtering on (100) SrTiO₃ substrates. For STO and BTO barriers, asymmetric I-V curves at positive and negative polarization can be obtained using electrodes with different work functions. The main difference is that BTO is a ferroelectric material, whereas in STO ferroelectricity can be induced by stress or deformation at the interfaces. In addition, hysteretic I-V curves are obtained for BTO barriers, which can be ascribed to the combined effect of ferroelectric polarization switching and oxygen vacancy migration. For GBCO/BTO/GBCO heterostructures, the I-V curves correspond to those expected for asymmetric interfaces, which indicates that disorder affects the properties of the bottom and top interfaces differently. Our results show the role of interface disorder in the electrical transport of conductor/insulator/conductor heterostructures, which is relevant for applications ranging from resistive switching memories (at room temperature) to Josephson junctions (at low temperatures). The superconducting transition of the GBCO electrode was characterized by electrical transport in the 4-probe configuration; films with a low density of topological defects and a Tc above the boiling point of liquid N₂ can be obtained at thicknesses of 16 nm. Our results demonstrate that GBCO films with an average root-mean-square (RMS) roughness smaller than 1 nm and areas (up to 100 μm²) free of 3-D topological defects can be obtained.

Keywords: thin film, sputtering, conductive atomic force microscopy, tunnel junctions

Procedia PDF Downloads 144
12788 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado

Authors: Masahiro Suzuki, Nobuyuki Okura

Abstract:

The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, despite the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes in the past, with casualties. Therefore, the aerodynamic effects of a tornado on a train were studied using several approaches. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatus: a tornado simulator and a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig drives a 1/40-scale model train, in single-car or three-car configuration, through the swirling flow at speeds of up to 4.3 m/s. The model car has 72 pressure ports on its surface for estimating the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and of the train's path on the aerodynamic forces were investigated using the Rankine vortex model; the results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is greatest when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train. The simulated results agreed qualitatively with the experimental ones.
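
For reference, the Rankine vortex used above models the tangential wind speed as solid-body rotation inside the core (velocity growing linearly with radius) and a free vortex (1/r decay) outside it. A minimal sketch in Python, plugging in the simulator's reported core radius (96 mm) and maximum tangential velocity (7.4 m/s); the function name is ours:

    def rankine_tangential_velocity(r: float, v_max: float = 7.4,
                                    r_core: float = 0.096) -> float:
        """Tangential wind speed (m/s) of a Rankine vortex at radius r (m)."""
        if r <= r_core:
            return v_max * r / r_core   # solid-body rotation inside the core
        return v_max * r_core / r       # free-vortex (1/r) decay outside

    for r in (0.05, 0.096, 0.2, 0.5):
        print(f"r = {r:5.3f} m -> v = {rankine_tangential_velocity(r):.2f} m/s")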

Keywords: aerodynamic force, experimental method, tornado, train

Procedia PDF Downloads 222